Sample records for lvdt linear variable

  1. Method and apparatus for calibrating a linear variable differential transformer

    DOEpatents

    Pokrywka, Robert J [North Huntingdon, PA

    2005-01-18

A calibration apparatus for calibrating a linear variable differential transformer (LVDT) having an armature positioned in an LVDT armature orifice, the armature able to move along an axis of movement. The calibration apparatus includes a heating mechanism with an internal chamber, a temperature measuring mechanism for measuring the temperature of the LVDT, a fixture mechanism with an internal chamber for at least partially accepting the LVDT and for securing the LVDT within the heating mechanism internal chamber, a moving mechanism for moving the armature, a position measurement mechanism for measuring the position of the armature, and an output voltage measurement mechanism. A method for calibrating an LVDT, including the steps of: powering the LVDT; heating the LVDT to a desired temperature; measuring the position of the armature with respect to the armature orifice; and measuring the output voltage of the LVDT.
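The calibration procedure above amounts to recording armature position against LVDT output voltage at a controlled temperature and extracting a sensitivity and zero offset. A minimal sketch of that fitting step, assuming a simple least-squares line fit (function names and readings are invented for illustration, not taken from the patent):

```python
# Hypothetical sketch of the LVDT calibration fit: given armature
# positions (from the position measurement mechanism) and output
# voltages (from the voltage measurement mechanism) recorded at one
# soak temperature, fit v = sensitivity * x + offset by least squares.

def fit_lvdt_calibration(positions_mm, voltages_mv):
    """Return (sensitivity in mV/mm, zero offset in mV)."""
    n = len(positions_mm)
    mean_x = sum(positions_mm) / n
    mean_v = sum(voltages_mv) / n
    sxx = sum((x - mean_x) ** 2 for x in positions_mm)
    sxv = sum((x - mean_x) * (v - mean_v)
              for x, v in zip(positions_mm, voltages_mv))
    sensitivity = sxv / sxx
    offset = mean_v - sensitivity * mean_x
    return sensitivity, offset

# Invented example readings at one temperature:
x = [-2.0, -1.0, 0.0, 1.0, 2.0]          # armature position, mm
v = [-400.2, -199.9, 0.1, 200.0, 400.1]  # LVDT output, mV
sens, off = fit_lvdt_calibration(x, v)   # sens near 200 mV/mm, off near 0
```

Repeating the fit at each soak temperature yields the sensitivity-versus-temperature behavior that a high-temperature calibration is after.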

  2. State Estimation for Humanoid Robots

    DTIC Science & Technology

    2015-07-01

Excerpts from the table of contents and acronym list: 2.2.1 Linear Inverted Pendulum Model; 2.2.2 Planar Five-link Model... Linear Inverted Pendulum Model; LVDT, Linear Variable Differential Transformers; MEMS, Microelectromechanical Systems; MHE, Moving Horizon Estimator; QP...

  3. Improvements In Ball-Screw Linear Actuators

    NASA Technical Reports Server (NTRS)

    Iskenderian, Theodore; Joffe, Benjamin; Summers, Robert

    1996-01-01

    Report describes modifications of design of type of ball-screw linear actuator driven by dc motor, with linear-displacement feedback via linear variable-differential transformer (LVDT). Actuators used to position spacecraft engines to direct thrust. Modifications directed toward ensuring reliable and predictable operation during planned 12-year cruise and interval of hard use at end of cruise.

  4. Evaluation of prestress cable strain in multiple beam configurations.

    DOT National Transportation Integrated Search

    1996-08-01

A system to measure prestress cable strain was fabricated, software was written, and the unit was calibrated. Strain measurements were made by attaching four Linear Variable Differential Transformers (LVDTs) to prestress cables before they were stressed.

  5. Gage for measuring displacements in rock samples

    DOEpatents

    Holcomb, D.J.; McNamee, M.J.

    1985-07-18

    A gage for measuring diametral displacement within a rock sample for use in a rock mechanics laboratory and in the field, comprises a support ring housing a linear variable differential transformer (LVDT), a mounting screw, and a leaf spring. The mounting screw is adjustable and defines a first point of contact with the rock sample. The leaf spring has opposite ends fixed to the inner periphery of the mounting ring. An intermediate portion of the leaf spring projecting radially inward from the ring is formed with a dimple defining a second point of contact with the sample. The first and second points of contact are diametrically opposed to each other. The LVDT is mounted in the ring with its axis parallel to the line of measurement and its core rod received in the dimple of the leaf spring. Any change in the length of the line between the first and second support points is directly communicated to the LVDT. The leaf spring is rigid to completely support lateral forces so that the LVDT is free of all load for improved precision.

  6. Electronic skewing circuit monitors exact position of object underwater

    NASA Technical Reports Server (NTRS)

    Roller, R.; Yaroshuk, N.

    1967-01-01

    Linear Variable Differential Transformer /LVDT/ electronic skewing circuit guides a long cylindrical capsule underwater into a larger tube so that it does not contact the tube wall. This device detects movement of the capsule from a reference point and provides a continuous signal that is monitored on an oscilloscope.

  7. Tool setting device

    DOEpatents

    Brown, Raymond J.

    1977-01-01

    The present invention relates to a tool setting device for use with numerically controlled machine tools, such as lathes and milling machines. A reference position of the machine tool relative to the workpiece along both the X and Y axes is utilized by the control circuit for driving the tool through its program. This reference position is determined for both axes by displacing a single linear variable displacement transducer (LVDT) with the machine tool through a T-shaped pivotal bar. The use of the T-shaped bar allows the cutting tool to be moved sequentially in the X or Y direction for indicating the actual position of the machine tool relative to the predetermined desired position in the numerical control circuit by using a single LVDT.

  8. Gage for measuring displacements in rock samples

    DOEpatents

    Holcomb, David J.; McNamee, Michael J.

    1986-01-01

    A gage for measuring diametral displacement within a rock sample for use in a rock mechanics laboratory and in the field, comprises a support ring housing a linear variable differential transformer, a mounting screw, and a leaf spring. The mounting screw is adjustable and defines a first point of contact with the rock sample. The leaf spring has opposite ends fixed to the inner periphery of the mounting ring. An intermediate portion of the leaf spring projecting radially inward from the ring is formed with a dimple defining a second point of contact with the sample. The first and second points of contact are diametrically opposed to each other. The LVDT is mounted in the ring with its axis parallel to the line of measurement and its core rod received in the dimple of the leaf spring. Any change in the length of the line between the first and second support points is directly communicated to the LVDT. The leaf spring is rigid to completely support lateral forces so that the LVDT is free of all load for improved precision.

  9. Material test machine for tension-compression tests at high temperature

    DOEpatents

    Cioletti, Olisse C.

    1988-01-01

Apparatus providing a device for testing the properties of material specimens at high temperatures and pressures in controlled water chemistries includes, inter alia, an autoclave housing the specimen which is being tested. The specimen is connected to a pull rod which couples out of the autoclave to an external assembly which includes one or more transducers, a force balance chamber and a piston type actuator. The pull rod feeds through the force balance chamber and is compensated thereby for the pressure conditions existing within the autoclave and tending to eject the pull rod therefrom. The upper end of the pull rod is connected to the actuator through elements containing a transducer comprising a linear variable differential transformer (LVDT). The housing and coil assembly of the LVDT is coupled to a tube which runs through a central bore of the pull rod into the autoclave where it is connected to one side of the specimen. The movable core of the LVDT is coupled to a stem which runs through the tube where it is then connected to the other side of the specimen through a coupling member. A transducer in the form of a load cell including one or more strain gages is located on a necked-down portion of the upper part of the pull rod intermediate the LVDT and force balance chamber.

  10. Advanced Control Systems for Aircraft Powerplants

    DTIC Science & Technology

    1980-02-01

production of high-integrity software. 1.0 INTRODUCTION Work on full-authority digital control for gas turbines was started at Rolls-Royce Limited... INTRODUCTION In order to fully understand the operation of the Secondary Power System Control Unit (abbreviated SPSCU) we must first take a close look at... Abbreviations: ...Only Memory; EPROM -- Erasable Read Only Memory; PLA -- Power Lever Angle; LVDT -- Linear Variable Differential Transformer... INTRODUCTION Preliminary design

  11. Performance Characterization of a Novel Plasma Thruster to Provide a Revolutionary Operationally Responsive Space Capability with Micro- and Nano-Satellites

    DTIC Science & Technology

    2011-03-24

and radiation resistance of rare earth permanent magnets for applications such as ion thrusters and high efficiency Stirling Radioisotope Generators... From the nomenclature list: Electron Transitioning; Discharge Current; Discharge Power; Discharge Voltage; Θ, Divergence Angle; Earth's Gravity at Sea Level... Hall effect thruster; HIVAC, High Voltage Hall Accelerator; LEO, Low Earth Orbit; LDS, Laser Displacement System; LVDT, Linear variable differential...

  12. Apparatus Tests Peeling Of Bonded Rubbery Material

    NASA Technical Reports Server (NTRS)

    Crook, Russell A.; Graham, Robert

    1996-01-01

    Instrumented hydraulic constrained blister-peel apparatus obtains data on degree of bonding between specimen of rubbery material and rigid plate. Growth of blister tracked by video camera, digital clock, pressure transducer, and piston-displacement sensor. Cylinder pressure controlled by hydraulic actuator system. Linear variable-differential transformer (LVDT) and float provide second, independent measure of change in blister volume used as more precise volume feedback in low-growth-rate test.

  13. Numerical and analytical investigation of steel beam subjected to four-point bending

    NASA Astrophysics Data System (ADS)

    Farida, F. M.; Surahman, A.; Sofwan, A.

    2018-03-01

One type of bending test is the four-point bending test. The aim of this test is to investigate the properties and behavior of materials with structural applications. This study uses numerical and analytical methods, and results from both help to improve experimental work. The purpose of this study is to predict the behavior of a steel beam subjected to a four-point bending test, analyzing the flexural beam prior to experimental work. The main results of this research are the locations of the strain gauges and the LVDT on the steel beam, based on numerical study, manual calculation, and analytical study. The analytical study uses the linear elasticity theory of solid objects. The strain gauges are located between the two concentrated loads, at the top and bottom of the beam; the LVDT is located between the two concentrated loads.

  14. Icing research tunnel rotating bar calibration measurement system

    NASA Technical Reports Server (NTRS)

    Gibson, Theresa L.; Dearmon, John M.

    1993-01-01

    In order to measure icing patterns across a test section of the Icing Research Tunnel, an automated rotating bar measurement system was developed at the NASA Lewis Research Center. In comparison with the previously used manual measurement system, this system provides a number of improvements: increased accuracy and repeatability, increased number of data points, reduced tunnel operating time, and improved documentation. The automated system uses a linear variable differential transformer (LVDT) to measure ice accretion. This instrument is driven along the bar by means of an intelligent stepper motor which also controls data recording. This paper describes the rotating bar calibration measurement system.

  15. Optical Measurement Technique for Space Column Characterization

    NASA Technical Reports Server (NTRS)

    Barrows, Danny A.; Watson, Judith J.; Burner, Alpheus W.; Phelps, James E.

    2004-01-01

    A simple optical technique for the structural characterization of lightweight space columns is presented. The technique is useful for determining the coefficient of thermal expansion during cool down as well as the induced strain during tension and compression testing. The technique is based upon object-to-image plane scaling and does not require any photogrammetric calibrations or computations. Examples of the measurement of the coefficient of thermal expansion are presented for several lightweight space columns. Examples of strain measured during tension and compression testing are presented along with comparisons to results obtained with Linear Variable Differential Transformer (LVDT) position transducers.

  16. Development of non-conventional instrument transformers (NCIT) using smart materials

    NASA Astrophysics Data System (ADS)

    Nikolić, Bojan; Khan, Sanowar; Gabdullin, Nikita

    2016-11-01

This paper presents a novel approach for current measurement using smart materials: magnetic shape memory (MSM) alloys. Their shape change can be controlled by the application of magnetic field or mechanical stress. This gives the possibility to measure currents by correlating the magnetic field produced by the current, the shape change in an MSM-based sensor, and the voltage output of a Linear Variable Differential Transducer (LVDT) actuated by this shape change. The first part of the paper presents a review of existing current measurement sensors, comparing their properties and highlighting their advantages and disadvantages.

  17. Use of the total station for load testing of retrofitted bridges with limited access

    NASA Astrophysics Data System (ADS)

    Merkle, Wesley J.; Myers, John J.

    2004-07-01

As new technologies are increasingly applied to civil infrastructure, the need for structural monitoring systems becomes more critical. Serviceability, or deflection, is very important in monitoring the health of a structural system and in analyzing the effects of a new technology applied in the field. Traditionally, Linear Variable Displacement Transducers (LVDTs) are used to measure deflection in many field load tests. In the field, access can easily become an issue with this instrumentation system, which is truly designed for laboratory use. LVDT instrumentation for load testing typically requires several labor-intensive hours to prepare for a load test in the field; the system is accompanied by wiring and expensive electronics that may not only become a safety issue but are also very sensitive to the elements. Setup is especially difficult, if not impossible, on tall bridge spans and bridge spans over water. A recent research project required serviceability monitoring through a series of load tests for several retrofitted bridges in Missouri. For these tests, surveying equipment was employed in an attempt to make serviceability measurement more practicable. Until recently, surveying equipment would not have produced the accuracy required for structural monitoring use; however, manufacturers of this equipment have developed new technologies to increase the accuracy of the instrumentation. The major component used, the total station, can measure deflection to an accuracy of 0.2 millimeters (0.0079 in.). This monitoring system is much easier to set up and use, reducing labor and time requirements, and has almost no site restrictions. This paper compares and contrasts the total station with traditional load testing monitoring equipment (LVDT).

  18. Hand-Held Electronic Gap-Measuring Tools

    NASA Technical Reports Server (NTRS)

    Sugg, F. E.; Thompson, F. W.; Aragon, L. A.; Harrington, D. B.

    1985-01-01

    Repetitive measurements simplified by tool based on LVDT operation. With fingers in open position, Gap-measuring tool rests on digital readout instrument. With fingers inserted in gap, separation alters inductance of linear variable-differential transformer in plastic handle. Originally developed for measuring gaps between surface tiles of Space Shuttle orbiter, tool reduces measurement time from 20 minutes per tile to 2 minutes. Also reduces possibility of damage to tiles during measurement. Tool has potential applications in mass production; helps ensure proper gap dimensions in assembly of refrigerator and car doors and also used to measure dimensions of components and to verify positional accuracy of components during progressive assembly operations.

  19. Tool calibration system for micromachining system

    DOEpatents

    Miller, Donald M.

    1979-03-06

A tool calibration system including a tool calibration fixture and a tool height and offset calibration insert for calibrating the position of a tool bit in a micromachining tool system. The tool calibration fixture comprises a yokelike structure having a triangular head, a cavity in the triangular head, and a port which communicates a side of the triangular head with the cavity. Yoke arms integral with the triangular head extend along each side of a tool bar and a tool head of the micromachining tool system. The yoke arms are secured to the tool bar to place the cavity around a tool bit which may be mounted to the end of the tool head. Three linear variable differential transformers (LVDTs) are adjustably mounted in the triangular head along an X axis, a Y axis, and a Z axis. The calibration insert comprises a main base which can be mounted in the tool head of the micromachining tool system in place of a tool holder and a reference projection extending from a front surface of the main base. Reference surfaces of the calibration insert and a reference surface on a tool bar standard length are used to set the three LVDTs of the calibration fixture to the tool reference position. These positions are transferred permanently to a mastering station. The tool calibration fixture is then used to transfer the tool reference position of the mastering station to the tool bit.

  20. An Experimental Study of a Stitched Composite with a Notch Subjected to Combined Bending and Tension Loading

    NASA Technical Reports Server (NTRS)

    Palmer, Susan O.; Nettles, Alan T.; Poe, C. C., Jr.

    1999-01-01

    A series of tests was conducted to measure the strength of stitched carbon/epoxy composites containing through-thickness damage in the form of a crack-like notch. The specimens were subjected to three types of loading: pure bending, pure tension, and combined bending and tension loads. Measurements of applied loads, strains near crack tips, and crack opening displacements (COD) were monitored in all tests. The transverse displacement at the center of the specimen was measured using a Linear Variable Differential Transformer (LVDT). The experimental data showed that the outer surface of the pure tension specimen failed at approximately 6,000 microstrain, while in combined bending and tension loads the measured tensile strains reached 10,000 microstrain.

  1. Results of Accelerated Life Testing of LCLS-II Cavity Tuner Motor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huque, Naeem; Daly, Edward; Pischalnikov, Yuriy

An Accelerated Life Test (ALT) of the Phytron stepper motor used in the LCLS-II cavity tuner has been conducted at JLab. Since the motor will reside inside the cryomodule, any failure would lead to a very costly and arduous repair. As such, the motor was tested for the equivalent of 30 lifetimes before being approved for use in the production cryomodules. The 9-cell LCLS-II cavity is simulated by disc springs with an equivalent spring constant. Plots of the motor position vs. tuner position, measured via an installed linear variable differential transformer (LVDT), are used to measure motor motion. The titanium spindle was inspected for loss of lubrication. The motor passed the ALT, and is set to be installed in the LCLS-II cryomodules.

  2. RESULTS OF ACCELERATED LIFE TESTING OF LCLS-II CAVITY TUNER MOTOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huque, Naeem; Daly, Edward F.; Pischalnikov, Yuriy

An Accelerated Life Test (ALT) of the Phytron stepper motor used in the LCLS-II cavity tuner has been conducted at JLab. Since the motor will reside inside the cryomodule, any failure would lead to a very costly and arduous repair. As such, the motor was tested for the equivalent of 30 lifetimes before being approved for use in the production cryomodules. The 9-cell LCLS-II cavity is simulated by disc springs with an equivalent spring constant. Plots of the motor position vs. tuner position, measured via an installed linear variable differential transformer (LVDT), are used to measure motor motion. The titanium spindle was inspected for loss of lubrication. The motor passed the ALT, and is set to be installed in the LCLS-II cryomodules.

  3. FY 2016 Status Report: CIRFT Testing on Spent Nuclear Fuels and Hydride Reorientation Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jy-An John; Wang, Hong; Yan, Yong

This report provides a detailed description of the Cyclic Integrated Reversible-Bending Fatigue Tester (CIRFT) testing conducted on spent nuclear fuel (SNF) rods in FY 2016, including hydride reorientation test results. Contact-based measurement, or three-LVDT-based curvature measurement, of SNF rods has proven to be quite reliable in CIRFT testing. However, how the linear variable differential transformer (LVDT) head contacts the SNF rod may have a significant effect on the curvature measurement, depending on the magnitude and direction of rod curvature. To correct such contact/curvature issues, sensor spacing, defined as the amount of separation between the three LVDT probes, is a critical measurement that can be used to calculate rod curvature once the deflections are obtained. Recently developed CIRFT data analysis procedures were integrated into FY 2016 CIRFT testing results for the curvature measurements. The variations in fatigue life are provided in terms of moment, equivalent stress, curvature, and equivalent strain for the tested SNFs. The equivalent stress plot collapsed the data points from all of the SNFs into a single zone. A detailed examination revealed that, at the same stress level, fatigue lives display a descending order as follows: H. B. Robinson Nuclear Power Station (HBR), Limerick Nuclear Power Station (LMK), mixed uranium-plutonium oxide (MOX). In terms of strain, LMK fuel has a slightly longer fatigue life than HBR fuel, but the difference is subtle. The knee point of the endurance limit in the curve of moment and curvature or equivalent quantities is more clearly defined for LMK and HBR fuels. Treatment affects the fatigue life of specimens. Both a drop of 12 in. and radial hydride treatment (RHT) have a negative impact on fatigue life. The effect of thermal annealing on MOX fuel rods was relatively small at higher amplitude but became significant at low amplitude of moment. Thermal annealing tended to extend the fatigue life of MOX fuel rod specimens. However, for HR4 testing, the thermal annealing treatment showed a negative impact on the fatigue life of the HBR rod.
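The three-LVDT curvature measurement described above can be sketched as a circular-arc fit through three equally spaced deflection readings. This is a generic illustration under that assumption, not the report's actual algorithm, and the sensor-spacing correction it discusses is omitted; all names are invented:

```python
import math

def curvature_from_three_lvdts(d1, d2, d3, spacing):
    """Rod curvature (1/mm) from three LVDT deflections (mm).

    d1 and d3 are the outer probes, d2 the middle probe, and
    `spacing` is the probe separation (the report's sensor spacing).
    Circular-arc fit: sagitta s of the middle reading relative to the
    chord through the outer two, then kappa = 2 s / (spacing^2 + s^2).
    """
    s = d2 - 0.5 * (d1 + d3)
    return 2.0 * s / (spacing ** 2 + s ** 2)

# Synthetic check: probes 6 mm apart reading points on an arc of
# radius 1000 mm, so the recovered curvature should be 1/1000 per mm.
sag = 1000.0 - math.sqrt(1000.0 ** 2 - 6.0 ** 2)
kappa = curvature_from_three_lvdts(0.0, sag, 0.0, 6.0)
```

The form of the expression also shows why the effective sensor spacing matters so much in the report: any error in `spacing` enters the computed curvature quadratically.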

  4. OSM-Classic : An optical imaging technique for accurately determining strain

    NASA Astrophysics Data System (ADS)

    Aldrich, Daniel R.; Ayranci, Cagri; Nobes, David S.

OSM-Classic is a program designed in MATLAB® to provide a method of accurately determining strain in a test sample using an optical imaging technique. Measuring strain for the mechanical characterization of materials is most commonly performed with extensometers, LVDTs (linear variable differential transformers), and strain gauges; however, these strain measurement methods suffer from their fragile nature, and it is not particularly easy to attach these devices to the material for testing. To alleviate these potential problems, an optical approach that does not require contact with the specimen can be implemented to measure the strain. OSM-Classic is software that interrogates a series of images to determine elongation in a test sample and hence the strain of the specimen. It was designed to provide a graphical user interface that includes image processing with a dynamic region of interest. Additionally, the strain is calculated directly while providing active feedback during the processing.
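The elongation-to-strain step behind such an optical technique is straightforward; a minimal sketch follows (in Python rather than MATLAB, with invented names, and without reproducing OSM-Classic's actual image processing):

```python
def strain_from_images(gauge_len_ref_px, gauge_len_def_px, mm_per_px):
    """Engineering strain from a gauge length measured in a reference
    image and in a deformed image.

    Note the object-to-image scale cancels in the ratio, which is why
    strain itself needs no photogrammetric calibration.
    """
    l0 = gauge_len_ref_px * mm_per_px  # reference gauge length, mm
    l = gauge_len_def_px * mm_per_px   # deformed gauge length, mm
    return (l - l0) / l0

# Invented example: a 1000-px gauge length stretches to 1005 px.
eps = strain_from_images(1000.0, 1005.0, 0.05)  # 0.5% strain
```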

  5. FY 2016 Status Report: CIRFT Testing Data Analyses and Updated Curvature Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jy-An John; Wang, Hong

This report provides a detailed description of FY15 test result corrections/analysis based on the FY16 Cyclic Integrated Reversible-Bending Fatigue Tester (CIRFT) test program methodology update used to evaluate the vibration integrity of spent nuclear fuel (SNF) under normal transportation conditions. The CIRFT consists of a U-frame testing setup and a real-time curvature measurement method. The three-component U-frame setup of the CIRFT has two rigid arms and linkages to a universal testing machine. The curvature of rod bending is obtained through a three-point deflection measurement method. Three linear variable differential transformers (LVDTs) are used and clamped to the side connecting plates of the U-frame to capture the deformation of the rod. The contact-based measurement, or three-LVDT-based curvature measurement system, on SNF rods has been proven to be quite reliable in CIRFT testing. However, how the LVDT head contacts the SNF rod may have a significant effect on the curvature measurement, depending on the magnitude and direction of rod curvature. It has been demonstrated that the contact/curvature issues can be corrected by using a correction on the sensor spacing. The sensor spacing defines the separation of the three LVDT probes and is a critical quantity in calculating the rod curvature once the deflections are obtained. The sensor spacing correction can be determined by using chisel-type probes. The method has been critically examined this year and has been shown to be difficult to implement in a hot cell environment, and thus cannot be implemented effectively. A correction based on the proposed equivalent gauge length has the required flexibility and accuracy and can be appropriately used as a correction factor. The correction method based on the equivalent gauge length has been successfully demonstrated in CIRFT data analysis for the dynamic tests conducted on Limerick (LMK) (17 tests), North Anna (NA) (6 tests), and Catawba mixed oxide (MOX) (10 tests) SNF samples. These CIRFT tests were completed in FY14 and FY15. Specifically, the data sets obtained from measurement and monitoring were processed and analyzed. The fatigue life of rods has been characterized in terms of moment, curvature, and equivalent stress and strain.

  6. Effect of dimethyl sulfoxide on dentin collagen.

    PubMed

    Mehtälä, P; Pashley, D H; Tjäderhane, L

    2017-08-01

Infiltration of adhesive into dentin matrix depends on the interaction of surface and adhesive. Interaction depends on dentin wettability, which can be enhanced either by increasing dentin surface energy or by lowering the surface energy of the adhesive. The objective was to examine the effect of dimethyl sulfoxide (DMSO) on demineralized dentin wettability and dentin organic matrix expansion. Acid-etched human dentin was used for sessile drop contact angle measurement to test surface wetting on 1-5% DMSO-treated demineralized dentin surfaces, and a linear variable differential transformer (LVDT) to measure expansion/shrinkage of the dentinal matrix. DMSO-water binary liquids were examined for surface tension changes through concentrations from 0 to 100% DMSO. Kruskal-Wallis and Mann-Whitney tests were used to test the differences in dentin wettability, expansion, and shrinkage, and the Spearman test to test the correlation between DMSO concentration and water surface tension. The level of significance was p<0.05. Pretreatment with 1-5% DMSO caused a statistically significant concentration-dependent increase in wetting: the immediate contact angles decreased by 11.8% and 46.6%, and the 60-s contact angles by 9.5% and 47.4%, with 1% and 5% DMSO, respectively. DMSO-water mixtures concentration-dependently expanded demineralized dentin samples less than pure water, except at high (≥80%) DMSO concentrations, which expanded demineralized dentin more than water. Drying times of LVDT samples increased significantly with the use of DMSO. Increased dentin wettability may explain the previously demonstrated increase in adhesive penetration into DMSO-treated dentin, and together with the expansion of the collagen matrix after drying may also explain the previously observed increase in dentin adhesive bonding. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  7. Air Vehicle Integration and Technology Research (AVIATR). Delivery Order 0013: Nonlinear, Low-Order/Reduced-Order Modeling Applications and Demonstration

    DTIC Science & Technology

    2011-12-01

Excerpts from the lists of figures and tables: Figure 156, Abaqus thermal model attempting to characterize the thermal profile seen in the test data; ...optimization process; Figure 159, Thermal profile for optimized Abaqus thermal solution; Figure 160, LVDT...; Coefficients of thermal expansion results; Table 12, LVDT correlation results.

  8. Irradiation creep and precipitation in a ferritic ODS steel under helium implantation

    NASA Astrophysics Data System (ADS)

    Chen, J.; Jung, P.; Pouchon, M. A.; Rebac, T.; Hoffelner, W.

    2008-02-01

Ferritic oxide dispersion strengthened (ODS) steel, PM2000, has been homogeneously implanted with helium under uniaxial tensile stresses from 20 to 250 MPa to maximum doses of about 0.75 dpa (3000 ppm He), with displacement damage rates of 5.5 × 10⁻⁶ dpa/s at temperatures of 573, 673 and 773 K. Straining of a miniaturized dog-bone specimen under helium implantation was monitored by a linear variable displacement transformer (LVDT), while its resistance was also measured by a four-pole technique. Creep compliance was almost constant at 5.7 × 10⁻⁶ dpa⁻¹ MPa⁻¹ for temperatures below 673 K and increased to 18 × 10⁻⁶ dpa⁻¹ MPa⁻¹ at 773 K. The resistivity of PM2000 samples decreased with dose and showed a tendency to saturation. Subsequent transmission electron microscopy observations indicated the formation of ordered Fe3-xCrxAl precipitates during implantation. Correlations between the microstructure and resistivity are discussed.

  9. Design and analysis of a novel mechanical loading machine for dynamic in vivo axial loading

    NASA Astrophysics Data System (ADS)

    Macione, James; Nesbitt, Sterling; Pandit, Vaibhav; Kotha, Shiva

    2012-02-01

This paper describes the construction of a loading machine for performing in vivo, dynamic mechanical loading of the rodent forearm. The loading machine utilizes a unique type of electromagnetic actuator with no mechanically resistive components (servotube), allowing highly accurate loads to be created. A regression analysis of the force created by the actuator with respect to the input voltage demonstrates high linear correlation (R² = 1). When the linear correlation is used to create dynamic loading waveforms in the frequency (0.5-10 Hz) and load (1-50 N) range used for in vivo loading, less than 1% normalized root mean square error (NRMSE) is computed. Larger NRMSE is found at increased frequencies, with 5%-8% occurring at 40 Hz, and reasons are discussed. Amplifiers (strain gauge, linear voltage displacement transducer (LVDT), and load cell) are constructed, calibrated, and integrated, to allow well-resolved dynamic measurements to be recorded at each program cycle. Each of the amplifiers uses an active filter with cutoff frequency at the maximum in vivo loading frequencies (50 Hz) so that electronic noise generated by the servo drive and actuator are reduced. The LVDT and load cell amplifiers allow evaluation of stress-strain relationships to determine if in vivo bone damage is occurring. The strain gauge amplifier allows dynamic force to strain calibrations to occur for animals of different sex, age, and strain. Unique features are integrated into the loading system, including a weightless mode, which allows the limbs of anesthetized animals to be quickly positioned and removed. Although the device is constructed for in vivo axial bone loading, it can be used within constraints, as a general measurement instrument in a laboratory setting.

  10. Design and analysis of a novel mechanical loading machine for dynamic in vivo axial loading.

    PubMed

    Macione, James; Nesbitt, Sterling; Pandit, Vaibhav; Kotha, Shiva

    2012-02-01

This paper describes the construction of a loading machine for performing in vivo, dynamic mechanical loading of the rodent forearm. The loading machine utilizes a unique type of electromagnetic actuator with no mechanically resistive components (servotube), allowing highly accurate loads to be created. A regression analysis of the force created by the actuator with respect to the input voltage demonstrates high linear correlation (R² = 1). When the linear correlation is used to create dynamic loading waveforms in the frequency (0.5-10 Hz) and load (1-50 N) range used for in vivo loading, less than 1% normalized root mean square error (NRMSE) is computed. Larger NRMSE is found at increased frequencies, with 5%-8% occurring at 40 Hz, and reasons are discussed. Amplifiers (strain gauge, linear variable differential transformer (LVDT), and load cell) are constructed, calibrated, and integrated to allow well-resolved dynamic measurements to be recorded at each program cycle. Each amplifier uses an active filter with its cutoff frequency at the maximum in vivo loading frequency (50 Hz) so that electronic noise generated by the servo drive and actuator is reduced. The LVDT and load cell amplifiers allow evaluation of stress-strain relationships to determine whether in vivo bone damage is occurring. The strain gauge amplifier allows dynamic force-to-strain calibrations for animals of different sex, age, and strain. Unique features are integrated into the loading system, including a weightless mode, which allows the limbs of anesthetized animals to be quickly positioned and removed. Although the device is constructed for in vivo axial bone loading, it can be used, within constraints, as a general measurement instrument in a laboratory setting.

  11. Scientific Research Program for Power, Energy, and Thermal Technologies. Task Order 0001: Energy, Power, and Thermal Technologies and Processes Experimental Research. Subtask: Thermal Management of Electromechanical Actuation System for Aircraft Primary Flight Control Surfaces

    DTIC Science & Technology

    2014-05-01

utilizing buoyancy differences in the vapor and liquid phases to pump the heat transfer fluid between the evaporator and condenser. In this particular... Virtual Instrumentation Engineering Workbench; LHP: Loop Heat Pipe; LVDT: Linear Variable Differential Transformer; MACE: Micro-technologies for Air... (Bland 1992). This type of duty cycle lends itself to thermal energy storage, which, when coupled with an effective heat transfer mechanism, can

  12. The effect of load position to the accuracy of deflection measured with LVDT sensor in I-girder bridge

    NASA Astrophysics Data System (ADS)

    Hidayat, Irpan; Suangga, Made; Reshki Maulana, Moh

    2017-12-01

The serviceability of a bridge decreases as a function of time, most likely due to cyclic loads from traffic. An indicator that can be measured to determine serviceability is the deflection of the girder. In this research, the PCI girder and vehicle load are analyzed using the finite element program Midas/Civil. For comparison, a running-vehicle test was conducted on the bridge, with bridge deflections measured using LVDT sensors on the PCI-girder bridge. To find the effect of vehicle distance on the LVDT position, the vehicle was run over several lanes. The finite element program (Midas/Civil) gives results relatively similar to the deflections measured with the LVDT sensors; however, when the vehicle load is situated far from the sensor, the two analyses show significant differences.

  13. Computer Vision-Based Structural Displacement Measurement Robust to Light-Induced Image Degradation for In-Service Bridges

    PubMed Central

    Lee, Junhwa; Lee, Kyoung-Chan; Cho, Soojin

    2017-01-01

    The displacement responses of a civil engineering structure can provide important information regarding structural behaviors that help in assessing safety and serviceability. A displacement measurement using conventional devices, such as the linear variable differential transformer (LVDT), is challenging owing to issues related to inconvenient sensor installation that often requires additional temporary structures. A promising alternative is offered by computer vision, which typically provides a low-cost and non-contact displacement measurement that converts the movement of an object, mostly an attached marker, in the captured images into structural displacement. However, there is limited research on addressing light-induced measurement error caused by the inevitable sunlight in field-testing conditions. This study presents a computer vision-based displacement measurement approach tailored to a field-testing environment with enhanced robustness to strong sunlight. An image-processing algorithm with an adaptive region-of-interest (ROI) is proposed to reliably determine a marker’s location even when the marker is indistinct due to unfavorable light. The performance of the proposed system is experimentally validated in both laboratory-scale and field experiments. PMID:29019950
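
The adaptive-ROI idea can be sketched without the full pipeline: re-center a search window on the marker's last known position, then take the intensity-weighted centroid of the bright pixels inside it. The NumPy function below is a minimal illustration; the window size, threshold, and washed-out-marker fallback are assumptions, not the authors' implementation.

```python
import numpy as np

def track_marker(frame, prev_xy, half=20, thresh=0.5):
    """Locate a bright marker near its previous (row, col) position.

    An adaptive region of interest (ROI) is re-centered on the last known
    marker location; the intensity-weighted centroid of pixels above
    `thresh` inside the ROI gives the new location.
    """
    r, c = int(round(prev_xy[0])), int(round(prev_xy[1]))
    r0, r1 = max(r - half, 0), min(r + half + 1, frame.shape[0])
    c0, c1 = max(c - half, 0), min(c + half + 1, frame.shape[1])
    roi = frame[r0:r1, c0:c1]
    mask = roi > thresh
    if not mask.any():          # marker washed out: keep the old estimate
        return prev_xy
    rows, cols = np.nonzero(mask)
    w = roi[rows, cols]
    return (r0 + np.average(rows, weights=w), c0 + np.average(cols, weights=w))

# Synthetic 100x100 frame with a bright 3x3 marker centered at (40, 60)
frame = np.zeros((100, 100))
frame[39:42, 59:62] = 1.0
print(track_marker(frame, (37.0, 57.0)))
```

Re-centering the ROI each frame is what keeps the marker in view as the structure moves, and the fallback to the previous estimate is one simple way to ride out momentary light-induced degradation.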

  14. Determination of the continuous cooling transformation diagram of a high strength low alloyed steel

    NASA Astrophysics Data System (ADS)

    Kang, Hun Chul; Park, Bong June; Jang, Ji Hun; Jang, Kwang Soon; Lee, Kyung Jong

    2016-11-01

The continuous cooling transformation diagram of a high strength low alloyed steel was determined by a dilatometer and microscopic analysis (OM, SEM) as well as thermodynamic analysis. As expected, Widmanstätten ferrite, bainite and martensite coexisted for most cooling rates, which made it difficult to determine the transformation kinetics of individual phases. However, peaks were clearly observed in the dilatometric d(LVDT)/dT curves. By overlapping the d(LVDT)/dT curves determined at various cooling rates, the peaks were separated and the peak-rate temperatures, as well as the temperature at the start of transformation (5%) and the end of transformation (95%) of an individual phase, were determined. An SEM analysis was also conducted to identify which phases existed and to quantify the volume fraction of each phase. It was confirmed that the additional d(LVDT)/dT curve analysis described the transformation behavior more precisely than the conventional continuous cooling transformation diagram, as determined by the volume fractions measured from the microstructure analysis.
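
The peak-finding step described above can be illustrated numerically: differentiate the dilatometric signal with respect to temperature, take the extremum as the peak-rate temperature, and read the 5% and 95% transformed-fraction temperatures off the normalized curve. The sigmoid below is synthetic stand-in data, not a measurement from the paper.

```python
import numpy as np

# Synthetic dilatometric trace during cooling: LVDT displacement vs temperature
T = np.linspace(900.0, 300.0, 601)                 # cooling, deg C, 1 deg steps
lvdt = 1.0 / (1.0 + np.exp((T - 600.0) / 15.0))    # transformation step near 600 C

dldT = np.gradient(lvdt, T)                        # d(LVDT)/dT
T_peak = T[np.argmax(np.abs(dldT))]                # peak-rate temperature

frac = (lvdt - lvdt.min()) / (lvdt.max() - lvdt.min())  # transformed fraction
T_start = T[np.argmax(frac >= 0.05)]               # 5 % transformed (start)
T_end = T[np.argmax(frac >= 0.95)]                 # 95 % transformed (end)
print(T_peak, T_start, T_end)
```

With overlapping peaks from several phases, the same derivative curves from multiple cooling rates would first have to be separated, as the paper describes, before reading off each phase's start and end temperatures.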

  15. Telescoping magnetic ball bar test gage

    DOEpatents

    Bryan, J.B.

    1982-03-15

A telescoping magnetic ball bar test gage for determining the accuracy of machine tools, including robots, and those measuring machines having non-disengageable servo drives which cannot be clutched out. Two gage balls are held and separated from one another by a telescoping fixture which allows them relative radial motional freedom but not relative lateral motional freedom. The telescoping fixture comprises a parallel reed flexure unit and a rigid member. One gage ball is secured by a magnetic socket knuckle assembly which fixes its center with respect to the machine being tested. The other gage ball is secured by another magnetic socket knuckle assembly which is engaged or held by the machine in such manner that the center of that ball is directed to execute a prescribed trajectory, all points of which are equidistant from the center of the fixed gage ball. As the moving ball executes its trajectory, changes in the radial distance between the centers of the two balls caused by inaccuracies in the machine are determined or measured by a linear variable differential transformer (LVDT) assembly actuated by the parallel reed flexure unit. Measurements can be quickly and easily taken for multiple trajectories about several different fixed ball locations, thereby determining the accuracy of the machine.

  16. Detecting Solenoid Valve Deterioration in In-Use Electronic Diesel Fuel Injection Control Systems

    PubMed Central

    Tsai, Hsun-Heng; Tseng, Chyuan-Yow

    2010-01-01

The diesel engine is the main power source for most agricultural vehicles, and the control of diesel engine emissions is an important global issue. Fuel injection control systems directly affect the fuel efficiency and emissions of diesel engines. Deterioration faults, such as rack deformation, solenoid valve failure, and rack-travel sensor malfunction, can occur in the fuel injection module of electronic diesel control (EDC) systems. Among these faults, solenoid valve failure is the most likely to occur in in-use diesel engines. According to previous studies, this failure results from wear of the plunger and sleeve after a long period of usage, lubricant degradation, or engine overheating. Because of the difficulty in identifying solenoid valve deterioration, this study focuses on developing a sensor identification algorithm that can clearly classify the usability of the solenoid valve without disassembling the fuel pump of an EDC system in in-use agricultural vehicles. A diagnostic algorithm is proposed, comprising a feedback controller, a parameter identifier, a linear variable differential transformer (LVDT) sensor, and a neural network classifier. Experimental results show that the proposed algorithm can accurately identify the usability of solenoid valves. PMID:22163597
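
As a toy stand-in for the usability classification (the paper trains a neural network on identified parameters; none of that is reproduced here), the sketch below thresholds a single feature: the rise time of the LVDT-measured rack-travel step response, on the assumption that a worn solenoid valve responds sluggishly. The time constants and the 30 ms limit are invented for illustration.

```python
import math

def rise_time(t, x, target=0.9):
    """Time for the response to first reach `target` of its final value."""
    final = x[-1]
    for ti, xi in zip(t, x):
        if xi >= target * final:
            return ti
    return float("inf")

def valve_usable(t, x, limit=0.030):
    """Classify the valve as usable if the rack responds within `limit` s."""
    return rise_time(t, x) <= limit

t = [i * 0.001 for i in range(100)]                    # 1 kHz samples, 0-99 ms
healthy = [1.0 - math.exp(-ti / 0.005) for ti in t]    # tau = 5 ms
worn = [1.0 - math.exp(-ti / 0.020) for ti in t]       # tau = 20 ms
print(valve_usable(t, healthy), valve_usable(t, worn))
```

A real classifier would use identified model parameters rather than a single hand-picked threshold, but the principle, mapping LVDT response features to a usable/unusable decision, is the same.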

  17. Detecting solenoid valve deterioration in in-use electronic diesel fuel injection control systems.

    PubMed

    Tsai, Hsun-Heng; Tseng, Chyuan-Yow

    2010-01-01

The diesel engine is the main power source for most agricultural vehicles, and the control of diesel engine emissions is an important global issue. Fuel injection control systems directly affect the fuel efficiency and emissions of diesel engines. Deterioration faults, such as rack deformation, solenoid valve failure, and rack-travel sensor malfunction, can occur in the fuel injection module of electronic diesel control (EDC) systems. Among these faults, solenoid valve failure is the most likely to occur in in-use diesel engines. According to previous studies, this failure results from wear of the plunger and sleeve after a long period of usage, lubricant degradation, or engine overheating. Because of the difficulty in identifying solenoid valve deterioration, this study focuses on developing a sensor identification algorithm that can clearly classify the usability of the solenoid valve without disassembling the fuel pump of an EDC system in in-use agricultural vehicles. A diagnostic algorithm is proposed, comprising a feedback controller, a parameter identifier, a linear variable differential transformer (LVDT) sensor, and a neural network classifier. Experimental results show that the proposed algorithm can accurately identify the usability of solenoid valves.

  18. Normalized Rotational Multiple Yield Surface Framework (NRMYSF) stress-strain curve prediction method based on small strain triaxial test data on undisturbed Auckland residual clay soils

    NASA Astrophysics Data System (ADS)

    Noor, M. J. Md; Ibrahim, A.; Rahman, A. S. A.

    2018-04-01

Small strain triaxial measurement is considered significantly more accurate than external strain measurement using the conventional method, owing to the systematic errors normally associated with the test. Three submersible miniature linear variable differential transformers (LVDTs) were mounted on yokes clamped directly onto the soil sample, spaced equally at 120° from one another. The setup, using a 0.4 N resolution load cell and a 16-bit A/D converter, was capable of consistently resolving displacements of less than 1 µm and measuring axial strains ranging from less than 0.001% to 2.5%. Further analysis of the small strain local measurement data was performed using the new Normalized Rotational Multiple Yield Surface Framework (NRMYSF) method and compared with the existing Rotational Multiple Yield Surface Framework (RMYSF) prediction method. The prediction of shear strength based on the combined intrinsic curvilinear shear strength envelope using small strain triaxial test data confirmed the significant improvement and reliability of the measurement and analysis methods. Moreover, the NRMYSF method shows excellent data prediction and a significant improvement toward more reliable prediction of soil strength that can reduce the cost and time of laboratory testing.

  19. LLNL/Lion Precision LVDT amplifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopkins, D.J.

    1994-04-01

A high-precision, low-noise LVDT amplifier has been developed which is a significant advancement on the current state of the art in contact displacement measurement. This amplifier offers the dynamic range of a typical LVDT probe but with a resolution that rivals that of non-contact displacement measuring systems such as capacitance gauges and laser interferometers. Resolution of 0.1 µin with 100 Hz bandwidth is possible. This level of resolution is over an order of magnitude better than what is now commercially available. A front panel switch can reduce the bandwidth to 2.5 Hz and attain a resolution of 0.025 µin. This level of resolution meets or exceeds that of displacement measuring laser interferometry or capacitance gauge systems. Contact displacement measurement offers high part spatial resolution and therefore can measure not only part contour but surface finish. Capacitance gauges and displacement laser interferometry offer poor part spatial resolution and cannot provide good surface finish measurements. Machine tool builders, metrologists and quality inspection departments can immediately utilize the higher accuracy and capabilities that this amplifier offers. The precision manufacturing industry can improve as a result of the improved capability to measure parts, helping to reduce costs and minimize material waste.
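
The two quoted resolution/bandwidth pairs can be compared against a simple noise model. For ideal white noise, RMS resolution scales with the square root of bandwidth; this scaling model is the editor's assumption, not the report's, and the shortfall from it is one plausible sign of 1/f noise dominating at low frequency.

```python
import math

# Quoted: 0.1 uin at 100 Hz bandwidth, 0.025 uin at 2.5 Hz bandwidth.
res_100hz = 0.100                              # uin
ideal_2p5hz = res_100hz / math.sqrt(100.0 / 2.5)
print(f"ideal white-noise prediction at 2.5 Hz: {ideal_2p5hz:.4f} uin "
      f"(quoted: 0.025 uin, a factor-of-4 improvement vs ideal ~6.3)")
```
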

  20. Telescoping magnetic ball bar test gage

    DOEpatents

    Bryan, J.B.

    1984-03-13

    A telescoping magnetic ball bar test gage for determining the accuracy of machine tools, including robots, and those measuring machines having non-disengageable servo drives which cannot be clutched out is disclosed. Two gage balls are held and separated from one another by a telescoping fixture which allows them relative radial motional freedom but not relative lateral motional freedom. The telescoping fixture comprises a parallel reed flexure unit and a rigid member. One gage ball is secured by a magnetic socket knuckle assembly which fixes its center with respect to the machine being tested. The other gage ball is secured by another magnetic socket knuckle assembly which is engaged or held by the machine in such manner that the center of that ball is directed to execute a prescribed trajectory, all points of which are equidistant from the center of the fixed gage ball. As the moving ball executes its trajectory, changes in the radial distance between the centers of the two balls caused by inaccuracies in the machine are determined or measured by a linear variable differential transformer (LVDT) assembly actuated by the parallel reed flexure unit. Measurements can be quickly and easily taken for multiple trajectories about several different fixed ball locations, thereby determining the accuracy of the machine. 3 figs.

  1. Telescoping magnetic ball bar test gage

    DOEpatents

    Bryan, James B.

    1984-01-01

    A telescoping magnetic ball bar test gage for determining the accuracy of machine tools, including robots, and those measuring machines having non-disengageable servo drives which cannot be clutched out. Two gage balls (10, 12) are held and separated from one another by a telescoping fixture which allows them relative radial motional freedom but not relative lateral motional freedom. The telescoping fixture comprises a parallel reed flexure unit (14) and a rigid member (16, 18, 20, 22, 24). One gage ball (10) is secured by a magnetic socket knuckle assembly (34) which fixes its center with respect to the machine being tested. The other gage ball (12) is secured by another magnetic socket knuckle assembly (38) which is engaged or held by the machine in such manner that the center of that ball (12) is directed to execute a prescribed trajectory, all points of which are equidistant from the center of the fixed gage ball (10). As the moving ball (12) executes its trajectory, changes in the radial distance between the centers of the two balls (10, 12) caused by inaccuracies in the machine are determined or measured by a linear variable differential transformer (LVDT) assembly (50, 52, 54, 56, 58, 60) actuated by the parallel reed flexure unit (14). Measurements can be quickly and easily taken for multiple trajectories about several different fixed ball (10) locations, thereby determining the accuracy of the machine.
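
The measurement principle lends itself to a short sketch: the machine error at each commanded point is the deviation of the ball-center separation from the nominal bar length, which is exactly what the LVDT assembly reads. The coordinates and deviations below are illustrative values, not from the patent.

```python
import math

def radial_errors(points, fixed_center, nominal_length):
    """Deviation of each moving-ball center distance from the nominal bar
    length -- the quantity the LVDT reads as the ball traces its path."""
    return [math.dist(p, fixed_center) - nominal_length for p in points]

# Nominal 300 mm bar about a fixed ball at the origin; the second and third
# points deviate radially by +0.01 mm and -0.02 mm
pts = [(300.0, 0.0, 0.0), (0.0, 300.01, 0.0), (0.0, 0.0, 299.98)]
errs = radial_errors(pts, (0.0, 0.0, 0.0), 300.0)
print(errs)
```

Repeating this over several trajectories and fixed-ball locations, as the patent describes, maps the machine's volumetric error rather than a single plane.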

  2. Analysis and experimental evaluation of a Stewart platform-based force/torque sensor

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Antrazi, Sami S.

    1992-01-01

The kinematic analysis and experimentation of a force/torque sensor whose design is based on the mechanism of the Stewart Platform are discussed. Besides being used for measurement of forces/torques, the sensor also serves as a compliant platform which provides passive compliance during a robotic assembly task. It consists of two platforms, the upper compliant platform (UCP) and the lower compliant platform (LCP), coupled together through six spring-loaded pistons whose length variations are measured by six linear variable differential transformers (LVDTs) mounted along the pistons. Solutions to the forward and inverse kinematics of the force sensor are derived. Based on the known spring constant and the piston length changes, forces/torques applied to the LCP gripper are computed using vector algebra. Results of experiments conducted to evaluate the sensing capability of the force sensor are reported and discussed.
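
The vector-algebra step can be sketched as follows: each leg carries a force of spring constant times measured length change along its unit vector, and summing forces and moments about the platform center gives the applied wrench. The flat, vertical-leg geometry and spring constant below are assumed purely for illustration; the real platform's leg directions come from its kinematics.

```python
import numpy as np

def wrench(attach_pts, unit_vecs, k, delta_l):
    """Applied force/torque from six leg length changes: each leg carries
    k * delta_l along its unit vector; moments are taken about the origin."""
    leg_forces = (k * np.asarray(delta_l))[:, None] * np.asarray(unit_vecs)
    force = leg_forces.sum(axis=0)
    torque = np.cross(np.asarray(attach_pts), leg_forces).sum(axis=0)
    return force, torque

# Illustrative geometry: six vertical legs on a 0.1 m circle, k = 1e4 N/m
ang = np.deg2rad(np.arange(0.0, 360.0, 60.0))
pts = np.stack([0.1 * np.cos(ang), 0.1 * np.sin(ang), np.zeros(6)], axis=1)
units = np.tile([0.0, 0.0, 1.0], (6, 1))
F, T = wrench(pts, units, 1.0e4, [1.0e-3] * 6)   # every leg compressed 1 mm
print(F, T)
```

With all six legs compressed equally, the symmetric geometry yields a pure axial force and zero net torque, a useful degenerate case for checking the sign conventions.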

  3. Experiments with a Differential Transformer

    ERIC Educational Resources Information Center

    Aguilar, Horacio Munguía

    2016-01-01

    An experiment with an electric transformer based on single coils shows how electromagnetic induction changes when the magnetic coupling between coils is adjusted. This transformer has two secondary outputs which are taken differentially. This is the basis for a widely used position transducer known as LVDT.

  4. Design and Laboratory Evaluation of Future Elongation and Diameter Measurements at the Advanced Test Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    K. L. Davis; D. L. Knudson; J. L. Rempe

New materials are being considered for fuel, cladding, and structures in next generation and existing nuclear reactors. Such materials can undergo significant dimensional and physical changes during high temperature irradiations. In order to accurately predict these changes, real-time data must be obtained under prototypic irradiation conditions for model development and validation. To provide such data, researchers at the Idaho National Laboratory (INL) High Temperature Test Laboratory (HTTL) are developing several instrumented test rigs to obtain data real-time from specimens irradiated in well-controlled pressurized water reactor (PWR) coolant conditions in the Advanced Test Reactor (ATR). This paper reports the status of INL efforts to develop and evaluate prototype test rigs that rely on Linear Variable Differential Transformers (LVDTs) in laboratory settings. Although similar LVDT-based test rigs have been deployed in lower flux Materials Testing Reactors (MTRs), this effort is unique because it relies on robust LVDTs that can withstand higher temperatures and higher fluxes than often found in other MTR irradiations. Specifically, the test rigs are designed for detecting changes in length and diameter of specimens irradiated in ATR PWR loops. Once implemented, these test rigs will provide ATR users with unique capabilities that are sorely needed to obtain measurements such as elongation caused by thermal expansion and/or creep loading and diameter changes associated with fuel and cladding swelling, pellet-clad interaction, and crud buildup.
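
For scale, the kind of elongation such a rig must resolve can be estimated from thermal expansion alone. The specimen length, temperature rise, and expansion coefficient below are assumed handbook-style values for a zirconium-alloy specimen, not INL data.

```python
# Thermal elongation estimate: delta_L = alpha * L0 * delta_T
ALPHA = 6.0e-6        # 1/degC, approximate axial value for a Zr alloy
L0_MM = 100.0         # assumed specimen length
DELTA_T = 300.0       # assumed temperature rise, degC

delta_l_um = ALPHA * L0_MM * DELTA_T * 1000.0
print(f"expected thermal elongation: {delta_l_um:.0f} um")
```

An elongation on the order of 180 µm from thermal expansion dwarfs typical creep- or swelling-induced changes over short intervals, which is why the thermal contribution must be separated out before interpreting the LVDT signal.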

  5. A Hydraulic Blowdown Servo System For Launch Vehicle

    NASA Astrophysics Data System (ADS)

    Chen, Anping; Deng, Tao

    2016-07-01

This paper introduces a hydraulic blowdown servo system developed for a solid launch vehicle in the family of Chinese Long March vehicles. It is the thrust vector control (TVC) system for the first stage. The system is a cold-gas blowdown hydraulic servo system and consists of a gas vessel, hydraulic reservoir, servo actuator, digital control unit (DCU), electric explosion valve, and pressure regulator. A brief description of the main assemblies and characteristics follows. a) The gas vessel is a resin/carbon-fiber composite overwrapped pressure vessel with a titanium liner; its volume is about 30 liters. b) The hydraulic reservoir is a titanium alloy piston-type reservoir with a magnetostrictive sensor as the fluid level indicator; its volume is about 30 liters. c) The servo actuator is an equal-area linear piston actuator with a two-stage, low null-leakage servo valve and a linear variable differential transducer (LVDT) to feed back the piston position; its stall force is about 120 kN. d) The digital control unit (DCU) is a compact digital controller based on a digital signal processor (DSP) and deploys dual-redundant 1553B digital buses to communicate with the onboard computer. e) The electric explosion valve is a normally closed valve that confines the high-pressure helium gas. f) The pressure regulator is a spring-loaded poppet pressure valve that regulates the gas pressure from about 60 MPa to about 24 MPa. g) The whole system is mounted in the aft skirt of the vehicle. h) The system delivers approximately 40 kW of hydraulic power while the total mass is less than 190 kg; the power-to-mass ratio is about 0.21 kW/kg. Development and system testing have been completed. Bench and motor static-firing tests verified that all performances meet the design requirements. The servo system is thus suitable for use on the solid launch vehicle.
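
The quoted power-to-mass figure follows directly from the stated numbers; the check below is just that arithmetic, taking the "less than 190 kg" mass at its bound.

```python
# 40 kW of hydraulic power from a system massing just under 190 kg
power_kw = 40.0
mass_kg = 190.0
ratio = power_kw / mass_kg
print(f"power-to-mass ratio: {ratio:.2f} kW/kg")
```
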

  6. Fatigue Testing of Maglev-Hybrid Box Beam

    DTIC Science & Technology

    2009-03-02

Fatigue Testing of Maglev-Hybrid Box Beam. Final report, March 2, 2009 (dates covered: 23 May 2006 - 14 Sep 2008; Contract N00014-06-1-0872; prepared by Dr. J.L. Grenestedt and Dr. R. Sause). ... The girder was previously built under a collaboration between Maglev Inc. and Lehigh University. The girder was instrumented with strain gages and LVDTs to monitor...

  7. Infrasound Sensor Calibration and Response

    DTIC Science & Technology

    2012-09-01

infrasound calibration chamber. Under separate funding, a number of upgrades were made to the chamber. These include a Geotech Smart24 digitizer and workstation, an LVDT sensor for piston-phone phase measurement, a...20 samples per second on a GeoTech Instruments DL 24 digitizer. Fifty cycles of data were fit with the Matlab function NLINFIT, which gave the peak

  8. Low Energy Consumption Hydraulic Techniques

    DTIC Science & Technology

    1988-08-30

usually at welds. SECTION II, PHASE I - ADVANCED AIRCRAFT HYDRAULIC SYSTEM SELECTION: Phase I included Task 1, selection of the aircraft and definition...The valve face was bronze plated. The bearings were 52100 tool steel and the pistons were M50 tool steel. The shoe faces were 4140 with bronze plate, and the back...magnet assembly; coil assembly; DDV force motor (first stage); main control valve (second stage); main control valve LVDT (Figure 282: Direct...

  9. Low Cost Gyrocompass.

    DTIC Science & Technology

    1984-06-01

consists of a pair of LVDTs, amplifiers, and electromagnetic forcers. The current through the forcers provides a measure of tilt angle since it measures...suspension system which exhibits the astatic property (zero friction and infinite compliance). There are no "grey" or questionable areas in the design since...due to relative base translation is achieved by using a parallelogram tripod knife-edge arrangement of flexure...in contrast to a simple

  10. NE-CAT Upgrade of the Bending Magnet Beamline 8BM at the ALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Jun; Ogata, Craig; Yang Xiaochun

    2007-01-19

NE-CAT, the North East Collaborative Access Team bending magnet beamline (8BM), is a beamline for protein crystallography. Recently, the beamline has undergone upgrades of its x-ray optics and control system, and the addition of a robot automounter. The first crystal of the double crystal monochromator was replaced by a new design offered by Oxford Danfysik with a micro-finned, direct water-cooled crystal assembly that provides better cooling and reduced thermal distortion, pressure-induced bulge, and residual strain. Gear-reduced motors were added to enhance the torque of the bender and obtain better control. For measuring displacement of the bender directly, two linear variable differential transformers (LVDTs) were installed on the second crystal assembly. Early optics characterization and analysis has been carried out. Besides the upgrade of the optical components, the Blu-Ice control system originally developed at SSRL has been implemented. The installation of an automated robotic sample mounting system, from the ALS, was carried out in collaboration with the engineering group at LBNL. Preliminary results are presented.

  11. An Investigation of Certain Thermodynamic Losses in Miniature Cryocoolers

    DTIC Science & Technology

    2007-05-02

system. Instrumentation channels (excerpt): 1 - pressure in Volume A (Endevco; 2.408 bar/V; offset 6.0206 bar); 2 - pressure in Volume C (Druck 200...; 1.9677 bar/V; offset 15.444 bar); 3 - not connected; 4 - pressure in compressor body (Druck 820; 6.0046 bar/V; offset 8.3174 bar); 5 - piston position (LVDT; 2.0884 mm/V; offset -5.5968 mm)...probe (courtesy of Prof. Moriyoshi). Left: 3D view showing two fine thermocouples in a cross configuration. Right: side elevation. Results

  12. Vacuum Head Checks Foam/Substrate Bonds

    NASA Technical Reports Server (NTRS)

    Lloyd, James F.

    1989-01-01

Electromechanical inspection system quickly gives measurements indicating adhesion, or lack thereof, between rigid polyurethane foam and aluminum substrate. Does not damage inspected article, is easy to operate, and can be used to perform "go/no-go" evaluations or as supplement to conventional destructive pull-plug testing. Applies vacuum to small area of foam panel and measures distance through which foam is pulled into vacuum. Probe head is applied to specimen and evacuated through hose to controller/monitor unit. Digital voltmeter in unit reads deflection of LVDT probe head.

  13. Experimental Study for Structural Behaviour of Precast Lightweight Panel (PLP) Under Flexural Load

    NASA Astrophysics Data System (ADS)

    Goh, W. I.; Mohamad, N.; Tay, Y. L.; Rahim, N. H. A.; Jhatial, A. A.; Samad, A. A. A.; Abdullah, R.

    2017-06-01

Precast lightweight concrete slabs are fabricated in a workshop before construction, then transported to site and installed by skilled labour. This can reduce construction time by minimizing user delay and the time needed for cast-in-situ work, increasing workability and efficiency. Although foamed concrete has low compressive strength compared to normal-weight concrete, it has excellent thermal insulation and sound absorption; it is also environmentally friendly and helps in resource reduction. To determine the material properties of the foamed concrete, nine cube and six cylindrical specimens were fabricated and the results recorded. In this study, the structural behaviour of a precast lightweight panel (PLP) with a dry density of 1800 kg/m3 was tested under flexural load. The results were recorded and analysed in terms of ultimate load, crack pattern, load-deflection profiles and strain distribution. Linear variable differential transformers (LVDTs) and strain gauges were used to determine the deflection and strain distribution of the PLP. The theoretical and experimental ultimate loads of the PLP were 70 and 62 kN respectively, a difference of 12.9%. Based on the results, it can be observed that the PLP can resist adequate loading. Thus, it can be used in the precast industry for construction purposes.
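
The quoted 12.9% figure is the gap between theoretical and experimental ultimate loads expressed relative to the experimental value:

```python
# Theoretical vs experimental ultimate load of the PLP, as quoted above
theoretical_kn = 70.0
experimental_kn = 62.0
diff_pct = (theoretical_kn - experimental_kn) / experimental_kn * 100.0
print(f"difference: {diff_pct:.1f} %")
```
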

  14. Proceedings of the Annual DARPA/AFGL Seismic Research Symposium (10th) Held in Fallbrook, California on May 3-5, 1988. Addendum

    DTIC Science & Technology

    1990-12-17

    were measured with LVDT’s mounted on rings encircling the sample. Axial force was generated from an amplified D/A sinusoidal voltage and up to 1000...19 inch rack mount or stand-alone, 7.0 inches high 10 Mbit Ethernet With on-board Transceiver 1.5 Mb/Sec Asynchronous SCSI Bus UNIX and C right to...use license HA RD WARE 4 or 12 MByte Expansion Memory Boards Monochrome or Color Video, Monitor, Keyboard, and Mouse 19-inch Rack- Mount Removable Disk

  15. Low-cost, efficient wireless intelligent sensors (LEWIS) measuring real-time reference-free dynamic displacements

    NASA Astrophysics Data System (ADS)

    Ozdagli, A. I.; Liu, B.; Moreu, F.

    2018-07-01

According to railroad managers, the displacement of railroad bridges under service loads is an important parameter in condition assessment and performance evaluation. However, measuring bridge responses in the field is often costly and labor-intensive. This paper proposes a low-cost, efficient wireless intelligent sensor (LEWIS) platform that can compute, in real time, the dynamic transverse displacements of railroad bridges under service loads. The sensing platform is built on the open-source Arduino ecosystem and combines low-cost microcontrollers with affordable accelerometers and wireless transmission modules. The proposed LEWIS system is designed to reconstruct dynamic displacements from acceleration measurements onboard, eliminating the need for offline post-processing, and to transmit the data in real time to a base station, where an inspector at the bridge can see the displacements while the train is crossing, or over the internet to a remote office if desired. The researchers validated the effectiveness of the new LEWIS by conducting a series of laboratory experiments. A shake table setup simulated transverse bridge displacements measured in the field and excited the proposed platform, a commercially available (and expensive) wired accelerometer, and a reference LVDT displacement sensor. The responses obtained from the wireless system were compared to the displacements reconstructed from the commercial accelerometer readings and the reference LVDT. The results of the laboratory experiments demonstrate that the proposed system is capable of accurately reconstructing the transverse displacements of railroad bridges under revenue service traffic and transmitting the data wirelessly in real time. In conclusion, the platform presented in this paper can be used for cost-effective and accurate performance assessment of the railroad bridge network. Future work includes collecting real-time reference-free displacements of a railroad bridge in Colorado under train crossings to further prove LEWIS' suitability for engineering applications.
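
A minimal version of acceleration-to-displacement reconstruction can be sketched as below, assuming the common approach of detrending and double numerical integration. The paper's onboard algorithm is more sophisticated (proper high-pass filtering against drift), so this is only a conceptual illustration; the crude mean-removal detrend works here because the test signal is a pure sinusoid over whole cycles.

```python
import numpy as np

def reconstruct_displacement(acc, fs):
    """Double-integrate acceleration to displacement with mean-removal
    detrending at each stage (a crude stand-in for high-pass filtering)."""
    acc = np.asarray(acc, dtype=float) - np.mean(acc)
    vel = np.cumsum(acc) / fs
    vel -= np.mean(vel)                     # suppress integration drift
    disp = np.cumsum(vel) / fs
    return disp - np.mean(disp)

fs = 200.0
t = np.arange(0.0, 5.0, 1.0 / fs)
true_disp = 2.0e-3 * np.sin(2.0 * np.pi * 2.0 * t)   # 2 mm, 2 Hz sway
acc = -(2.0 * np.pi * 2.0) ** 2 * true_disp          # analytic acceleration
est = reconstruct_displacement(acc, fs)
amp_mm = (est.max() - est.min()) / 2.0 * 1000.0
print(f"reconstructed amplitude: {amp_mm:.2f} mm")
```

Real train-induced responses are broadband and finite-length, which is exactly why reference-free reconstruction needs careful filter design rather than simple mean removal.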

  16. Linear versus non-linear measures of temporal variability in finger tapping and their relation to performance on open- versus closed-loop motor tasks: comparing standard deviations to Lyapunov exponents.

    PubMed

    Christman, Stephen D; Weaver, Ryan

    2008-05-01

    The nature of temporal variability during speeded finger tapping was examined using linear (standard deviation) and non-linear (Lyapunov exponent) measures. Experiment 1 found that right hand tapping was characterised by lower amounts of both linear and non-linear measures of variability than left hand tapping, and that linear and non-linear measures of variability were often negatively correlated with one another. Experiment 2 found that increased non-linear variability was associated with relatively enhanced performance on a closed-loop motor task (mirror tracing) and relatively impaired performance on an open-loop motor task (pointing in a dark room), especially for left hand performance. The potential uses and significance of measures of non-linear variability are discussed.
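To illustrate the contrast between a linear and a non-linear variability measure, here is a rough Python sketch computing the standard deviation of a series alongside a naive Rosenstein-style largest-Lyapunov estimate. A chaotic logistic map stands in for inter-tap interval data; none of the parameters reproduce the study's actual procedure:

```python
import math
import statistics

def largest_lyapunov(series, k_max=5, min_sep=10):
    """Naive Rosenstein-style estimate: for each point, find its nearest
    (temporally separated) neighbour, follow both trajectories, and average
    the log of their divergence; the slope over k is the exponent."""
    n = len(series)
    logs = [0.0] * (k_max + 1)
    counts = [0] * (k_max + 1)
    for i in range(n - k_max):
        best, best_j = None, None
        for j in range(n - k_max):
            if abs(i - j) < min_sep:
                continue
            d = abs(series[i] - series[j])
            if d > 0 and (best is None or d < best):
                best, best_j = d, j
        if best is None:
            continue
        for k in range(k_max + 1):
            d = abs(series[i + k] - series[best_j + k])
            if d > 0:
                logs[k] += math.log(d)
                counts[k] += 1
    mean_logs = [s / c for s, c in zip(logs, counts)]
    # least-squares slope of mean log-divergence versus step k
    ks = list(range(k_max + 1))
    kbar = sum(ks) / len(ks)
    lbar = sum(mean_logs) / len(mean_logs)
    return sum((k - kbar) * (m - lbar) for k, m in zip(ks, mean_logs)) / \
           sum((k - kbar) ** 2 for k in ks)

# Chaotic logistic map as a stand-in for an inter-tap interval series
x, series = 0.4, []
for _ in range(600):
    x = 4.0 * x * (1.0 - x)
    series.append(x)

sd = statistics.stdev(series)   # linear variability measure
lle = largest_lyapunov(series)  # non-linear variability measure (> 0: chaos)
```

A positive exponent indicates sensitive dependence on initial conditions, which the standard deviation alone cannot reveal.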

  17. Application of laser speckle displacement analysis to clinical dentistry

    NASA Astrophysics Data System (ADS)

    Cumberpatch, G. K. D.; Hood, J. A. A.

    1997-03-01

    The success of dental restorations depends on the integrity of the tooth/restoration interface. Distortion of teeth due to operative procedures has previously been measured using LVDTs and strain gauges, which provided useful but limited information. This paper reports on the verification of a laser speckle photography system and its use to quantify distortions in teeth caused by matrix band application and bonded composite resin restorations. Tightening a matrix band around a tooth results in an inward deformation of the cusps that increases incrementally as the band is tightened; deflections of 50 micrometers per cusp were recorded. A delayed recovery was noted, consistent with the viscoelastic behavior of dentine. For bonded restorations, this recovery will place the adhesion interface in a state of tension when the band is released and may cause premature failure. Premolar teeth restored with bonded resin restorations exhibited inward cusp displacements of 12 - 15 micrometers. The deformation was not within the buccal-lingual axis, as suggested by prior studies. Molar teeth with bonded composite resin restorations exhibited complex cusp displacements that were variable in both magnitude (0 - 30 micrometers) and direction. Complete and partial debonding could be detected, interproximal cusp bending could be quantified, and lifting of the restoration from the cavity floor was detectable. The deformations observed indicate that the tooth/restoration interface is in a stressed state, which may subsequently lead to failure. The technique has the potential to aid in the development of restoration techniques that minimize residual stress.

  18. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
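The method of least squares described in this abstract reduces to a short closed form in the simple (one-predictor) case. A minimal Python sketch with made-up data:

```python
def least_squares_line(x, y):
    """Fit y = a + b*x by ordinary least squares (closed form):
    b = S_xy / S_xx, a = ybar - b * xbar."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    a = ybar - b * xbar
    return a, b

# Hypothetical clinical-style example: dose (x) versus response (y)
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = least_squares_line(x, y)  # regression line y = a + b*x
```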

  19. Supporting second grade lower secondary school students’ understanding of linear equation system in two variables using ethnomathematics

    NASA Astrophysics Data System (ADS)

    Nursyahidah, F.; Saputro, B. A.; Rubowo, M. R.

    2018-03-01

    The aims of this research were to investigate students' understanding of systems of linear equations in two variables using ethnomathematics and to construct a learning trajectory for this topic for second-grade lower secondary school students. The research used the design research methodology, which consists of three phases: preliminary design, teaching experiment, and retrospective analysis. The subjects of this study were 28 second-grade students of Sekolah Menengah Pertama (SMP) 37 Semarang. The results show that students' understanding of systems of linear equations in two variables can be stimulated by using ethnomathematics, with the buying and selling tradition of the Peterongan traditional market in Central Java as a context. The strategies and models the students applied, together with their discussions, show how the students' own constructions and contributions helped them understand the concept. The activities the students carried out produced a learning trajectory toward the learning goal, and each step of that trajectory played an important role in moving their understanding of the concept from an informal to a formal level. The resulting ethnomathematics-based learning trajectory consists of watching a video of buying and selling activity in the Peterongan traditional market to construct a linear equation in two variables, determining the solution of a linear equation in two variables, constructing a model of a system of linear equations in two variables from a contextual problem, and solving a contextual problem related to systems of linear equations in two variables.

  20. New Optical Microbarometer

    NASA Astrophysics Data System (ADS)

    Nief, G.; Olivier, N.; Olivier, S.; Hue, A.

    2017-12-01

    Transducers in infrasound sensors (microbarometers) are usually composed of two elements: the first converts the external pressure variation into a linear mechanical displacement, and the second converts this motion into an electrical signal. Following this configuration, the MB3, MB2000, and MB2005 microbarometers use an aneroid capsule for the first element and an electromagnetic transducer (magnet-coil or LVDT) for the second. CEA DAM (designer of the MB series) and PROLANN / SEISMO WAVE (manufacturer and seller of the MB3) have combined their expertise to design a new optical microbarometer: replacing the electromagnetic transducer with an interferometer is a promising way to increase the dynamic range and resolution of the sensor, and we are currently exploring this approach in order to propose a future optical microbarometer that will enlarge the panel of infrasound sensors. First, we present the principles of the new transducer, covering the aneroid capsule and an interferometer based on integrated optics technology; in particular, we explain how this optical technology operates and discuss its advantages and drawbacks. Second, we present the optical microbarometer itself, in which the interferometer is positioned inside the aneroid capsule under vacuum. Adjusting the interferometer position was a challenge we solved, and the optical measurement is naturally protected from environmental disturbances. Four prototypes were manufactured in order to compare their performance, along with an optical digitizer specifically designed to record the interferometer's four channels. Finally, we present the results obtained with this sensor (sensitivity, self-noise, effect of environmental disturbances, etc.) compared to those of an MB3 microbarometer, and discuss the advantages of this new sensor.

  1. Interresponse Time Structures in Variable-Ratio and Variable-Interval Schedules

    ERIC Educational Resources Information Center

    Bowers, Matthew T.; Hill, Jade; Palya, William L.

    2008-01-01

    The interresponse-time structures of pigeon key pecking were examined under variable-ratio, variable-interval, and variable-interval plus linear feedback schedules. Whereas the variable-ratio and variable-interval plus linear feedback schedules generally resulted in a distinct group of short interresponse times and a broad distribution of longer…

  2. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    ERIC Educational Resources Information Center

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…

  3. A comparative study of wireless and wired sensors networks for deficit irrigation management

    NASA Astrophysics Data System (ADS)

    Torres Sánchez, Roque; Domingo Miguel, Rafael; Valles, Fulgencio Soto; Perez-Pastor, Alejandro; Lopez Riquelme, Juan Antonio; Blanco Montoya, Victor

    2016-04-01

    In recent years, the inclusion of sensors in agricultural water management has received increasing interest for establishing irrigation strategies such as regulated deficit irrigation (RDI). These strategies allow a significant improvement in crop water productivity (marketable yield / water applied), especially in woody orchards. Applying these deficit irrigation strategies requires monitoring variables related to the orchard in order to manage irrigation efficiently, since the soil and plant water status must be known to achieve the desired level of water deficit in each phenological stage. This involves measuring soil and plant parameters with appropriate instrumentation. Traditional centralized instrumentation systems include soil matric potential, water content, and LVDT sensors whose readings are stored by dataloggers with wired connections to the sensors. Nowadays, these wired systems are being replaced by wireless ones, mainly because of savings in wiring and labor. Wireless sensor networks (WSNs) allow a wide variety of parameters to be monitored in orchards with a high density of sensors, using discrete, autonomous nodes placed on trees or in the soil wherever necessary, without wiring. In this paper we present a trial in a cherry orchard with different irrigation strategies, in which both a wireless and a wired system were deployed with the aim of establishing criteria for selecting the most suitable technology in future agronomic monitoring systems. The first stage of this study included deploying the nodes, wires, and dataloggers and installing the sensors (the same for both the wired and wireless systems); this stage was completed during the first 15 weeks of the trial.
Specifically, 40 MPS6 soil matric potential sensors, 20 Enviroscan water content probes, and 40 dendrometers (LVDT and band) were installed to cover the experimental irrigation treatments: control, severe deficit, moderate deficit, low deficit, and traditional irrigation, each with 4 repetitions (2 wired and 2 wireless). The main goals were: (i) to assess the ability of WSNs to monitor areas with a high density of measurement points, (ii) to identify advantages and disadvantages compared to traditional wired instrumentation, (iii) to size the energy supply for autonomous operation of WSNs, and (iv) to develop node-deployment strategies that ensure the robustness of WSNs. The main conclusions were: (i) WSNs need less time to install than wired systems; (ii) WSNs are easier to install than wired systems because of the absence of wired links; (iii) the advantage of WSNs increases with the density of measurement points; (iv) maintenance demands are higher for WSNs than for wired centralized systems; (v) acquisition costs are similar for both systems; (vi) installation costs are higher for wired systems than for WSNs; (vii) data quality is similar in both systems, although WSN data are available sooner; (viii) data robustness is higher in wired systems than in WSNs, because WSN nodes depend on solar panels and batteries. This work was funded by the Ministerio de Economia y Competitividad, AGL2013-49047-C2-1R.

  4. CORRELATION PURSUIT: FORWARD STEPWISE VARIABLE SELECTION FOR INDEX MODELS

    PubMed Central

    Zhong, Wenxuan; Zhang, Tingting; Zhu, Yu; Liu, Jun S.

    2012-01-01

    In this article, a stepwise procedure, correlation pursuit (COP), is developed for variable selection under the sufficient dimension reduction framework, in which the response variable Y is influenced by the predictors X1, X2, …, Xp through an unknown function of a few linear combinations of them. Unlike linear stepwise regression, COP does not impose a special form of relationship (such as linear) between the response variable and the predictor variables. The COP procedure selects variables that attain the maximum correlation between the transformed response and the linear combination of the variables. Various asymptotic properties of the COP procedure are established; in particular, its variable selection performance with a diverging number of predictors and a growing sample size is investigated. The excellent empirical performance of the COP procedure in comparison with existing methods is demonstrated by both extensive simulation studies and a real example in functional genomics. PMID:23243388
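COP proper works with a transformed response under the sufficient dimension reduction framework; the sketch below shows only the simpler flavor of correlation-driven forward selection on synthetic data. All names and data are illustrative, not from the article:

```python
import random

def pearson(u, v):
    n = len(u)
    ub, vb = sum(u) / n, sum(v) / n
    cov = sum((a - ub) * (b - vb) for a, b in zip(u, v))
    su = sum((a - ub) ** 2 for a in u) ** 0.5
    sv = sum((b - vb) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

def forward_select(X, y, n_keep):
    """Greedily add the predictor most correlated (in absolute value) with
    the current residual, then regress the residual on that predictor."""
    chosen, resid = [], y[:]
    for _ in range(n_keep):
        best = max((j for j in range(len(X)) if j not in chosen),
                   key=lambda j: abs(pearson(X[j], resid)))
        chosen.append(best)
        col, n = X[best], len(resid)
        cbar, rbar = sum(col) / n, sum(resid) / n
        beta = sum((c - cbar) * (r - rbar) for c, r in zip(col, resid)) / \
               sum((c - cbar) ** 2 for c in col)
        resid = [r - rbar - beta * (c - cbar) for c, r in zip(col, resid)]
    return chosen

# Synthetic data: y depends on predictors 0 and 2 only
random.seed(1)
n = 200
X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(5)]
y = [2.0 * X[0][i] + 1.0 * X[2][i] + random.gauss(0, 0.5) for i in range(n)]
selected = forward_select(X, y, 2)  # recovers the two active predictors
```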

  5. Biostatistics Series Module 6: Correlation and Linear Regression.

    PubMed

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Correlation and linear regression are the most commonly used techniques for quantifying the association between two numeric variables. Correlation quantifies the strength of the linear relationship between paired variables, expressing this as a correlation coefficient. If both variables x and y are normally distributed, we calculate Pearson's correlation coefficient (r). If the normality assumption is not met for one or both variables in a correlation analysis, a rank correlation coefficient, such as Spearman's rho (ρ), may be calculated. A hypothesis test of correlation tests whether the linear relationship between the two variables holds in the underlying population, in which case it returns a P < 0.05. A 95% confidence interval of the correlation coefficient can also be calculated for an idea of the correlation in the population. The value r2 denotes the proportion of the variability of the dependent variable y that can be attributed to its linear relation with the independent variable x and is called the coefficient of determination. Linear regression is a technique that attempts to link two correlated variables x and y in the form of a mathematical equation (y = a + bx), such that given the value of one variable the other may be predicted. In general, the method of least squares is applied to obtain the equation of the regression line. Correlation and linear regression analysis are based on certain assumptions pertaining to the data sets. If these assumptions are not met, misleading conclusions may be drawn. The first assumption is that of a linear relationship between the two variables. A scatter plot is essential before embarking on any correlation-regression analysis to show that this is indeed the case. Outliers or clustering within data sets can distort the correlation coefficient value. Finally, it is vital to remember that though strong correlation can be a pointer toward causation, the two are not synonymous.
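The two coefficients this module compares are easy to compute directly: Spearman's rho is simply Pearson's r applied to ranks. A self-contained Python sketch on a monotonic but non-linear toy data set:

```python
def pearson_r(x, y):
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    sxy = sum((a - xb) * (b - yb) for a, b in zip(x, y))
    sxx = sum((a - xb) ** 2 for a in x)
    syy = sum((b - yb) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def ranks(v):
    """Average ranks, starting at 1 (ties share the mean rank)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    return pearson_r(ranks(x), ranks(y))

x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]      # monotonic but non-linear (y = x**2)
r = pearson_r(x, y)        # < 1: the linear fit is imperfect
rho = spearman_rho(x, y)   # = 1: the monotone relation is perfect
```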

  6. Biostatistics Series Module 6: Correlation and Linear Regression

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Correlation and linear regression are the most commonly used techniques for quantifying the association between two numeric variables. Correlation quantifies the strength of the linear relationship between paired variables, expressing this as a correlation coefficient. If both variables x and y are normally distributed, we calculate Pearson's correlation coefficient (r). If the normality assumption is not met for one or both variables in a correlation analysis, a rank correlation coefficient, such as Spearman's rho (ρ), may be calculated. A hypothesis test of correlation tests whether the linear relationship between the two variables holds in the underlying population, in which case it returns a P < 0.05. A 95% confidence interval of the correlation coefficient can also be calculated for an idea of the correlation in the population. The value r2 denotes the proportion of the variability of the dependent variable y that can be attributed to its linear relation with the independent variable x and is called the coefficient of determination. Linear regression is a technique that attempts to link two correlated variables x and y in the form of a mathematical equation (y = a + bx), such that given the value of one variable the other may be predicted. In general, the method of least squares is applied to obtain the equation of the regression line. Correlation and linear regression analysis are based on certain assumptions pertaining to the data sets. If these assumptions are not met, misleading conclusions may be drawn. The first assumption is that of a linear relationship between the two variables. A scatter plot is essential before embarking on any correlation-regression analysis to show that this is indeed the case. Outliers or clustering within data sets can distort the correlation coefficient value. Finally, it is vital to remember that though strong correlation can be a pointer toward causation, the two are not synonymous. PMID:27904175

  7. Expanding the occupational health methodology: A concatenated artificial neural network approach to model the burnout process in Chinese nurses.

    PubMed

    Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming

    2016-01-01

    Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible for modelling the burnout process, (2) sensitivity analysis is a fruitful method for studying the relative importance of predictor variables, and (3) the relationships among the variables involved in the development of burnout and its consequences are non-linear to different degrees. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method for analysing non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.

  8. Supporting Students' Understanding of Linear Equations with One Variable Using Algebra Tiles

    ERIC Educational Resources Information Center

    Saraswati, Sari; Putri, Ratu Ilma Indra; Somakim

    2016-01-01

    This research aimed to describe how algebra tiles can support students' understanding of linear equations with one variable. This article is a part of a larger research on learning design of linear equations with one variable using algebra tiles combined with balancing method. Therefore, it will merely discuss one activity focused on how students…

  9. State-variable analysis of non-linear circuits with a desk computer

    NASA Technical Reports Server (NTRS)

    Cohen, E.

    1981-01-01

    State-variable analysis was used to analyze the transient performance of non-linear circuits on a desk-top computer. The non-linearities considered were not restricted to any particular circuit element. All that is required for the analysis is that the relationship defining each non-linearity be known in terms of points on a curve.

  10. A duality approach for solving bounded linear programming problems with fuzzy variables based on ranking functions and its application in bounded transportation problems

    NASA Astrophysics Data System (ADS)

    Ebrahimnejad, Ali

    2015-08-01

    There are several methods in the literature for solving fuzzy variable linear programming problems (fuzzy linear programs in which the right-hand-side vectors and decision variables are represented by trapezoidal fuzzy numbers). In this paper, the shortcomings of some existing methods are pointed out, and to overcome them a new method based on the bounded dual simplex method is proposed for determining the fuzzy optimal solution of such fuzzy variable linear programming problems in which some or all variables are restricted to lie within lower and upper bounds. To illustrate the proposed method, an application example is solved and the obtained results are given. The advantages of the proposed method over existing methods are discussed. Also, an application of this algorithm to solving bounded transportation problems with fuzzy supplies and demands is presented. The proposed method is easy to understand and to apply for determining the fuzzy optimal solution of bounded fuzzy variable linear programming problems occurring in real-life situations.

  11. Advanced statistics: linear regression, part II: multiple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
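In matrix terms, multiple linear regression solves the normal equations (X'X)b = X'y. A minimal Python sketch on toy, noise-free data (not from the article) that recovers known coefficients:

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def multiple_regression(X, y):
    """Coefficients of y = b0 + b1*x1 + ... via the normal equations
    (X'X) b = X'y, with an intercept column prepended."""
    Z = [[1.0] + list(row) for row in X]
    p = len(Z[0])
    XtX = [[sum(z[i] * z[j] for z in Z) for j in range(p)] for i in range(p)]
    Xty = [sum(z[i] * yi for z, yi in zip(Z, y)) for i in range(p)]
    return solve(XtX, Xty)

# Noise-free check: y = 1 + 2*x1 - 3*x2 should be recovered exactly
X = [[0, 0], [1, 0], [0, 1], [1, 1], [2, 1], [1, 2]]
y = [1 + 2 * a - 3 * b for a, b in X]
b0, b1, b2 = multiple_regression(X, y)
```

With correlated predictors (multicollinearity), X'X becomes ill-conditioned and the solved coefficients unstable, which is one of the pitfalls this article discusses.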

  12. How Robust Is Linear Regression with Dummy Variables?

    ERIC Educational Resources Information Center

    Blankmeyer, Eric

    2006-01-01

    Researchers in education and the social sciences make extensive use of linear regression models in which the dependent variable is continuous-valued while the explanatory variables are a combination of continuous-valued regressors and dummy variables. The dummies partition the sample into groups, some of which may contain only a few observations.…

  13. Unitary Response Regression Models

    ERIC Educational Resources Information Center

    Lipovetsky, S.

    2007-01-01

    The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…

  14. Linear quadratic optimization for positive LTI system

    NASA Astrophysics Data System (ADS)

    Muhafzan, Yenti, Syafrida Wirma; Zulakmal

    2017-05-01

    Nowadays, linear quadratic optimization subject to a positive linear time-invariant (LTI) system constitutes an interesting study, since it can serve as a mathematical model for a variety of real problems whose variables must be nonnegative and whose trajectories must remain nonnegative. In this paper we propose a method to generate an optimal control for the linear quadratic optimization problem subject to a positive LTI system. A sufficient condition that guarantees the existence of such an optimal control is discussed.
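The paper's contribution concerns the positivity constraint; the unconstrained building block, the standard linear quadratic regulator, can be illustrated in the scalar discrete-time case by iterating the Riccati recursion to its fixed point. The parameters below are arbitrary, not from the paper:

```python
def lqr_scalar(a, b, q, r, iters=200):
    """Discrete-time scalar LQR: iterate the Riccati recursion
    P <- q + a^2 P - (a b P)^2 / (r + b^2 P) to its fixed point,
    then the optimal gain is K = a b P / (r + b^2 P)."""
    P = q
    for _ in range(iters):
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
    K = a * b * P / (r + b * b * P)
    return K, P

# An unstable scalar positive system x+ = 1.2 x + 0.5 u
K, P = lqr_scalar(a=1.2, b=0.5, q=1.0, r=1.0)
closed_loop = 1.2 - 0.5 * K  # stabilized: |a - bK| < 1
```

In this scalar case the closed-loop coefficient a - bK = a*r / (r + b^2*P) happens to stay positive, so positivity of the state is preserved; the paper addresses when such nonnegativity can be guaranteed in general.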

  15. Correlation and simple linear regression.

    PubMed

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.

  16. Linear Relationship between Resilience, Learning Approaches, and Coping Strategies to Predict Achievement in Undergraduate Students

    PubMed Central

    de la Fuente, Jesús; Fernández-Cabezas, María; Cambil, Matilde; Vera, Manuel M.; González-Torres, Maria Carmen; Artuch-Garde, Raquel

    2017-01-01

    The aim of the present research was to analyze the linear relationships between resilience (a meta-motivational variable), learning approaches (meta-cognitive variables), strategies for coping with academic stress (a meta-emotional variable), and academic achievement in the context of university academic stress. A total of 656 students from a southern university in Spain completed different questionnaires: a resiliency scale, a coping strategies scale, and a study process questionnaire. Correlations and structural modeling were used for data analyses. There was a positive and significant linear association showing that resilience was related to, and predicted, the deep learning approach and problem-centered coping strategies. In a complementary way, these variables positively and significantly predicted the academic achievement of the university students. These results established consistent and differential linear relationships of association and prediction among the variables studied. Implications for future research are set out. PMID:28713298

  17. Data-driven discovery of Koopman eigenfunctions using deep learning

    NASA Astrophysics Data System (ADS)

    Lusch, Bethany; Brunton, Steven L.; Kutz, J. Nathan

    2017-11-01

    Koopman operator theory transforms any autonomous non-linear dynamical system into an infinite-dimensional linear system. Since linear systems are well-understood, a mapping of non-linear dynamics to linear dynamics provides a powerful approach to understanding and controlling fluid flows. However, finding the correct change of variables remains an open challenge. We present a strategy to discover an approximate mapping using deep learning. Our neural networks find this change of variables, its inverse, and a finite-dimensional linear dynamical system defined on the new variables. Our method is completely data-driven and only requires measurements of the system, i.e. it does not require derivatives or knowledge of the governing equations. We find a minimal set of approximate Koopman eigenfunctions that are sufficient to reconstruct and advance the system to future states. We demonstrate the method on several dynamical systems.
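The paper learns such linearizing coordinates with neural networks; the textbook hand-constructed example below shows what a linearizing change of variables looks like for a simple system, checked numerically with Euler steps. The system and lifting are the classic illustration, not the authors' learned mapping:

```python
# Classic example: the nonlinear system
#   x1' = mu*x1,   x2' = lam*(x2 - x1^2)
# becomes exactly linear in the lifted variables
#   (y1, y2, y3) = (x1, x2, x1^2):
#   y1' = mu*y1,   y2' = lam*y2 - lam*y3,   y3' = 2*mu*y3
mu, lam, dt, steps = -0.1, -1.0, 1e-3, 2000

x1, x2 = 1.0, 0.5       # nonlinear 2-state system
y1, y2, y3 = 1.0, 0.5, 1.0  # lifted linear 3-state system
for _ in range(steps):
    x1, x2 = (x1 + dt * mu * x1,
              x2 + dt * lam * (x2 - x1 * x1))
    y1, y2, y3 = (y1 + dt * mu * y1,
                  y2 + dt * (lam * y2 - lam * y3),
                  y3 + dt * 2 * mu * y3)
# y2 tracks x2 to within Euler discretization error, despite the
# lifted system being purely linear
```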

  18. Using Log Linear Analysis for Categorical Family Variables.

    ERIC Educational Resources Information Center

    Moen, Phyllis

    The Goodman technique of log linear analysis is ideal for family research, because it is designed for categorical (non-quantitative) variables. Variables are dichotomized (for example, married/divorced, childless/with children) or otherwise categorized (for example, level of permissiveness, life cycle stage). Contingency tables are then…

  19. A FORTRAN technique for correlating a circular environmental variable with a linear physiological variable in the sugar maple.

    PubMed

    Pease, J M; Morselli, M F

    1987-01-01

    This paper deals with a computer program that implements a statistical method for analyzing an unlimited quantity of binary-recorded data on an independent circular variable (e.g., wind direction) and a linear variable (e.g., maple sap flow volume). Circular variables cannot be analyzed with linear statistical methods unless they have been transformed. The program calculates a critical quantity, the acrophase angle (PHI, phi o). The technique is adapted from original mathematics [1] and is written in Fortran 77 for easier conversion between computer networks. Correlation analysis can be performed after running the program, or regression analysis, which, because of the circular nature of the independent variable, becomes periodic regression. The technique was tested on a file of approximately 4050 data pairs.
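The acrophase computation can be illustrated in Python rather than Fortran 77. Assuming, unlike the general program, that the angles are equally spaced over the full circle, the least-squares fit of y = m + A·cos θ + B·sin θ has Fourier-style closed forms and the acrophase is atan2(B, A). The data below are invented for illustration:

```python
import math

def acrophase_fit(theta, y):
    """Fit y = m + A*cos(theta) + B*sin(theta), assuming the angles are
    equally spaced over the full circle so the closed forms apply;
    the acrophase is atan2(B, A)."""
    n = len(y)
    m = sum(y) / n
    A = 2.0 / n * sum(yi * math.cos(t) for t, yi in zip(theta, y))
    B = 2.0 / n * sum(yi * math.sin(t) for t, yi in zip(theta, y))
    return m, A, B, math.atan2(B, A)

# Synthetic sap-flow-style data peaking at wind direction 60 degrees
n, phi_true = 360, math.radians(60)
theta = [2 * math.pi * i / n for i in range(n)]
y = [10 + 3 * math.cos(t - phi_true) for t in theta]
m, A, B, phi = acrophase_fit(theta, y)  # phi recovers the 60-degree peak
```

For irregularly spaced angles, the same model is fit by solving the 3x3 normal equations instead of using the closed forms.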

  20. Application of Local Linear Embedding to Nonlinear Exploratory Latent Structure Analysis

    ERIC Educational Resources Information Center

    Wang, Haonan; Iyer, Hari

    2007-01-01

    In this paper we discuss the use of a recent dimension reduction technique called Locally Linear Embedding, introduced by Roweis and Saul, for performing an exploratory latent structure analysis. The coordinate variables from the locally linear embedding describing the manifold on which the data reside serve as the latent variable scores. We…

  1. Bayesian dynamical systems modelling in the social sciences.

    PubMed

    Ranganathan, Shyam; Spaiser, Viktoria; Mann, Richard P; Sumpter, David J T

    2014-01-01

    Data arising from social systems is often highly complex, involving non-linear relationships between the macro-level variables that characterize these systems. We present a method for analyzing this type of longitudinal or panel data using differential equations. We identify the best non-linear functions that capture interactions between variables, employing Bayes factor to decide how many interaction terms should be included in the model. This method punishes overly complicated models and identifies models with the most explanatory power. We illustrate our approach on the classic example of relating democracy and economic growth, identifying non-linear relationships between these two variables. We show how multiple variables and variable lags can be accounted for and provide a toolbox in R to implement our approach.
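As a rough stand-in for the Bayes-factor comparison described here, the sketch below scores a linear versus a linear-plus-quadratic "rate of change" model with BIC, a large-sample approximation to the Bayes factor that likewise penalizes extra terms. It is illustrative only, on synthetic data, and is not the authors' R toolbox:

```python
import math
import random

def fit_rss(xs, ys, basis):
    """Least-squares residual sum of squares of y on one or two basis
    functions of x (normal equations in closed form, no intercept)."""
    cols = [[f(x) for x in xs] for f in basis]
    if len(cols) == 1:
        c0 = cols[0]
        coef = [sum(c * y for c, y in zip(c0, ys)) /
                sum(c * c for c in c0)]
    else:
        c0, c1 = cols
        s11 = sum(a * a for a in c0)
        s22 = sum(a * a for a in c1)
        s12 = sum(a * b for a, b in zip(c0, c1))
        t1 = sum(a * y for a, y in zip(c0, ys))
        t2 = sum(a * y for a, y in zip(c1, ys))
        det = s11 * s22 - s12 * s12
        coef = [(t1 * s22 - t2 * s12) / det, (t2 * s11 - t1 * s12) / det]
    fit = [sum(b * col[i] for b, col in zip(coef, cols))
           for i in range(len(xs))]
    return sum((y - f) ** 2 for y, f in zip(ys, fit))

def bic(rss, n, k):
    # Gaussian-likelihood BIC: lower is better; k extra terms are penalized
    return n * math.log(rss / n) + k * math.log(n)

# Synthetic "rate of change" data with a genuine quadratic term
random.seed(2)
n = 300
x = [random.uniform(-1, 1) for _ in range(n)]
dy = [1.5 * xi + 2.0 * xi ** 2 + random.gauss(0, 0.1) for xi in x]

bic_lin = bic(fit_rss(x, dy, [lambda v: v]), n, 1)
bic_quad = bic(fit_rss(x, dy, [lambda v: v, lambda v: v * v]), n, 2)
# bic_quad < bic_lin: the richer model wins despite its penalty
```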

  2. Preparation and Characterization of Nitinol Bone Staples for Cranio-Maxillofacial Surgery

    NASA Astrophysics Data System (ADS)

    Lekston, Z.; Stróż, D.; Jędrusik-Pawłowska, M.

    2012-12-01

    The aim of this work was to form NiTi and TiNiCo body temperature activated and superelastic staples for clinical joining of mandible and face bone fractures. The alloys were obtained by VIM technique. Hot and cold processing was applied to obtain wires of required diameters. The martensitic transformation was studied by DSC, XRD, and TEM. The shape memory effects were measured by a bend and free recovery ASTM F2082-06 test. The superelasticity was recorded in the tension stress-strain and by the three-point bending cycles in an instrument equipped with a Hottinger force transducer and LVDT. Excellent superelastic behavior of TiNiCo wires was obtained after cold working and annealing at 400-500 °C. The body temperature activated shape memory staples were applied for fixation of mandibular condyle fractures. In experiments on the skull models, fixation of the facial fractures by using shape memory and superelastic staples were compared. The superelastic staples were used in osteosynthesis of zygomatico-maxillo-orbital fractures.

  3. Comparison of Classifiers for Decoding Sensory and Cognitive Information from Prefrontal Neuronal Populations

    PubMed Central

    Astrand, Elaine; Enel, Pierre; Ibos, Guilhem; Dominey, Peter Ford; Baraduc, Pierre; Ben Hamed, Suliann

    2014-01-01

    Decoding neuronal information is important in neuroscience, both as a basic means to understand how neuronal activity is related to cerebral function and as a processing stage in driving neuroprosthetic effectors. Here, we compare the readout performance of six commonly used classifiers at decoding two different variables encoded by the spiking activity of the non-human primate frontal eye fields (FEF): the spatial position of a visual cue, and the instructed orientation of the animal's attention. While the first variable is exogenously driven by the environment, the second variable corresponds to the interpretation of the instruction conveyed by the cue; it is endogenously driven and corresponds to the output of internal cognitive operations performed on the visual attributes of the cue. These two variables were decoded using either a regularized optimal linear estimator in its explicit formulation, an optimal linear artificial neural network estimator, a non-linear artificial neural network estimator, a non-linear naïve Bayesian estimator, a non-linear Reservoir recurrent network classifier or a non-linear Support Vector Machine classifier. Our results suggest that endogenous information such as the orientation of attention can be decoded from the FEF with the same accuracy as exogenous visual information. Not all classifiers behaved equally in the face of population size and heterogeneity, the available training and testing trials, the subject's behavior and the temporal structure of the variable of interest. In most situations, the regularized optimal linear estimator and the non-linear Support Vector Machine classifiers outperformed the other tested decoders. PMID:24466019
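
    This kind of decoder comparison can be mimicked in a few lines with scikit-learn on synthetic "population" data (a hedged sketch, not the paper's decoders or FEF recordings; a regularized linear classifier, a naive Bayes classifier and an SVM stand in for three of the six decoders):

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# synthetic population: 50 "neurons", 8 cue positions, noisy tuned responses
rng = np.random.default_rng(2)
n_trials, n_neurons, n_classes = 400, 50, 8
labels = rng.integers(0, n_classes, n_trials)
tuning = rng.normal(0, 1, (n_classes, n_neurons))          # per-class mean rates
X = tuning[labels] + rng.normal(0, 1.0, (n_trials, n_neurons))

decoders = {
    "regularized linear": RidgeClassifier(alpha=1.0),
    "naive Bayes": GaussianNB(),
    "SVM (RBF)": SVC(kernel="rbf", C=1.0),
}
# cross-validated decoding accuracy for each classifier
scores = {name: cross_val_score(clf, X, labels, cv=5).mean()
          for name, clf in decoders.items()}
```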

  4. LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL

    NASA Technical Reports Server (NTRS)

    Duke, E. L.

    1994-01-01

    The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting linearized engine effects, such as net thrust, torque, and gyroscopic effects, and including them in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case are input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of interest, or a full nonlinear aerodynamic model as used in simulations. LINEAR is written in FORTRAN and has been implemented on a DEC VAX computer operating under VMS with a virtual memory requirement of approximately 296K of 8 bit bytes. Both interactive and batch versions are included. LINEAR was developed in 1988.
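
    The core operation such a tool performs, extracting the matrices of a linear system model about an analysis point, can be sketched numerically with central differences (a toy pendulum stands in for the six-degree-of-freedom aircraft equations; this is an illustration of the idea, not LINEAR's implementation):

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Numerically linearize x_dot = f(x, u) about the point (x0, u0).

    Returns (A, B) such that x_dot ≈ f(x0, u0) + A (x - x0) + B (u - u0),
    using central-difference Jacobians.
    """
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# toy nonlinear "vehicle": a pendulum with a torque input
def f(x, u):
    return np.array([x[1], -9.81 * np.sin(x[0]) + u[0]])

A, B = linearize(f, np.array([0.0, 0.0]), np.array([0.0]))
```

    At the hanging equilibrium this recovers the familiar small-angle model A = [[0, 1], [-9.81, 0]], B = [[0], [1]].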

  5. Variable-energy drift-tube linear accelerator

    DOEpatents

    Swenson, Donald A.; Boyd, Jr., Thomas J.; Potter, James M.; Stovall, James E.

    1984-01-01

    A linear accelerator system includes a plurality of post-coupled drift-tubes wherein each post coupler is bistably positionable to either of two positions which result in different field distributions. With binary control over a plurality of post couplers, a significant cumulative effect in the resulting field distribution is achieved, yielding a variable-energy drift-tube linear accelerator.

  6. Variable-energy drift-tube linear accelerator

    DOEpatents

    Swenson, D.A.; Boyd, T.J. Jr.; Potter, J.M.; Stovall, J.E.

    A linear accelerator system includes a plurality of post-coupled drift-tubes wherein each post coupler is bistably positionable to either of two positions which result in different field distributions. With binary control over a plurality of post couplers, a significant cumulative effect in the resulting field distribution is achieved, yielding a variable-energy drift-tube linear accelerator.

  7. Variables Predicting Foreign Language Reading Comprehension and Vocabulary Acquisition in a Linear Hypermedia Environment

    ERIC Educational Resources Information Center

    Akbulut, Yavuz

    2007-01-01

    Factors predicting vocabulary learning and reading comprehension of advanced language learners of English in a linear multimedia text were investigated in the current study. Predictor variables of interest were multimedia type, reading proficiency, learning styles, topic interest and background knowledge about the topic. The outcome variables of…

  8. New optical microbarometer

    NASA Astrophysics Data System (ADS)

    Olivier, Serge; Hue, Anthony; Olivier, Nathalie; Le Mallet, Serge

    2015-04-01

    Transducers implemented in infrasound sensors (microbarometers) are generally composed of two elements: the first converts the external pressure variation into a linear mechanical displacement, and the second converts this motion into an electrical signal. Following this configuration, the MB3, MB2000 and MB2005 microbarometers use an aneroid capsule for the first element and an electromagnetic transducer (magnet-coil or LVDT) for the second. CEA DAM (designer of the MB series) and PROLANN / SEISMO WAVE (manufacturer and seller of the MB3) have combined their expertise to design an optical microbarometer: replacing the electromagnetic transducer with an interferometer is a promising way to increase the dynamic range and the resolution of the sensor. We are currently exploring this approach with a view to proposing a future optical microbarometer that will enlarge the panel of infrasound sensors. Firstly, we present the principles of the new transducer, covering the aneroid capsule and an interferometer based on integrated-optics technology; we explain the operation of this optical technology and discuss its advantages and drawbacks. Secondly, we present the first part of this project, in which the interferometer is positioned outside the aneroid capsule. In this configuration the mechanical adjustment of the interferometer is easier, but the measurement is directly disturbed by environmental effects such as thermal variations. Six prototypes were manufactured with two sets of different aneroid capsules in order to compare their performance, together with an optical digitizer specifically designed to record the four interferometer channels. We then present the first sensitivity and self-noise measurement results, compared with those of an MB2005 microbarometer. Finally, we propose a new design of the optical microbarometer as the second part of our study. This design places the interferometer inside the evacuated aneroid capsule in order to protect the optical measurement from environmental effects. Manufacturing such a prototype is a considerable challenge in terms of miniaturization and the mechanical stability of the interferometer.

  9. Do bioclimate variables improve performance of climate envelope models?

    USGS Publications Warehouse

    Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.

    2012-01-01

    Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the selection of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA using two different algorithms (random forests or generalized linear models), and two model selection techniques (using uncorrelated predictors or a subset of user-defined biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater using the random forest algorithm compared with generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.
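
    The algorithm comparison can be illustrated with scikit-learn on a synthetic species whose climate response is deliberately nonlinear (hypothetical variables and thresholds, not the study's data; random forest vs. a logistic GLM, scored by cross-validated AUC):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# hypothetical presence/absence driven nonlinearly by two climate variables
rng = np.random.default_rng(3)
n = 600
temp = rng.uniform(0, 30, n)           # e.g. mean annual temperature (deg C)
prec = rng.uniform(0, 3000, n)         # e.g. annual precipitation (mm)
# unimodal temperature response combined with a precipitation threshold
suitability = np.exp(-((temp - 22) / 4) ** 2) * (prec > 800)
present = (rng.uniform(0, 1, n) < 0.9 * suitability).astype(int)
X = np.column_stack([temp, prec])

auc = {
    "random forest": cross_val_score(
        RandomForestClassifier(n_estimators=200, random_state=0),
        X, present, cv=5, scoring="roc_auc").mean(),
    "GLM (logistic)": cross_val_score(
        LogisticRegression(max_iter=1000),
        X, present, cv=5, scoring="roc_auc").mean(),
}
```

    Because the simulated temperature response is unimodal, the linear GLM cannot fully capture it, which mirrors the study's finding of higher performance for the random forest algorithm.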

  10. Quantifying the Contribution of Wind-Driven Linear Response to the Seasonal and Interannual Variability of AMOC Volume Transports Across 26.5°N

    NASA Astrophysics Data System (ADS)

    Shimizu, K.; von Storch, J. S.; Haak, H.; Nakayama, K.; Marotzke, J.

    2014-12-01

    Surface wind stress is considered to be an important forcing of the seasonal and interannual variability of Atlantic Meridional Overturning Circulation (AMOC) volume transports. A recent study showed that even linear response to wind forcing captures observed features of the mean seasonal cycle. However, the study did not assess the contribution of wind-driven linear response in realistic conditions against the RAPID/MOCHA array observation or Ocean General Circulation Model (OGCM) simulations, because it applied a linear two-layer model to the Atlantic assuming constant upper layer thickness and density difference across the interface. Here, we quantify the contribution of wind-driven linear response to the seasonal and interannual variability of AMOC transports by comparing wind-driven linear simulations under realistic continuous stratification against the RAPID observation and OGCM (MPI-OM) simulations with 0.4° resolution (TP04) and 0.1° resolution (STORM). All the linear and MPI-OM simulations capture more than 60% of the variance in the observed mean seasonal cycle of the Upper Mid-Ocean (UMO) and Florida Strait (FS) transports, two components of the upper branch of the AMOC. The linear and TP04 simulations also capture 25-40% of the variance in the observed transport time series between Apr 2004 and Oct 2012; the STORM simulation does not capture the observed variance because of the stochastic signal in both datasets. Comparison of half-overlapping 12-month-long segments reveals some periods when the linear and TP04 simulations capture 40-60% of the observed variance, as well as other periods when the simulations capture only 0-20% of the variance. These results show that wind-driven linear response is a major contributor to the seasonal and interannual variability of the UMO and FS transports, and that its contribution varies on an interannual timescale, probably due to the variability of stochastic processes.

  11. A Partitioning and Bounded Variable Algorithm for Linear Programming

    ERIC Educational Resources Information Center

    Sheskin, Theodore J.

    2006-01-01

    An interesting new partitioning and bounded variable algorithm (PBVA) is proposed for solving linear programming problems. The PBVA is a variant of the simplex algorithm which uses a modified form of the simplex method followed by the dual simplex method for bounded variables. In contrast to the two-phase method and the big M method, the PBVA does…

  12. Simple linear and multivariate regression models.

    PubMed

    Rodríguez del Águila, M M; Benítez-Parejo, N

    2011-01-01

    In biomedical research it is common to find problems in which we wish to relate a response variable to one or more variables capable of describing the behaviour of the former variable by means of mathematical models. Regression techniques are used to this effect, in which an equation is determined relating the two variables. While such equations can have different forms, linear equations are the most widely used form and are easy to interpret. The present article describes simple and multiple linear regression models, how they are calculated, and how their applicability assumptions are checked. Illustrative examples are provided, based on the use of the freely accessible R program. Copyright © 2011 SEICAP. Published by Elsevier España. All rights reserved.
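
    The article's examples use R; the equivalent simple linear regression in Python (on synthetic data) shows the least-squares fit and the usual R² check:

```python
import numpy as np

# simple linear regression y = b0 + b1*x by least squares,
# mirroring R's lm(y ~ x); the data below are synthetic
rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, 100)   # true intercept 2.0, slope 0.5

X = np.column_stack([np.ones_like(x), x])      # design matrix with intercept
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]

# R^2: proportion of the variance in y explained by the model
resid = y - (b0 + b1 * x)
r2 = 1 - resid.var() / y.var()
```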

  13. Key-Generation Algorithms for Linear Piece In Hand Matrix Method

    NASA Astrophysics Data System (ADS)

    Tadaki, Kohtaro; Tsujii, Shigeo

    The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription which can be applicable to any type of multivariate public-key cryptosystems for the purpose of enhancing their security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which is introduced by our previous work to explain the notion of the PH matrix method in general in an illustrative manner and not for a practical use to enhance the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has the substantial advantage over the plus method with respect to the security enhancement. In the linear PH matrix method with random variables, the three matrices, including the PH matrix, play a central role in the secret-key and public-key. In this paper, we clarify how to generate these matrices and thus present two probabilistic polynomial-time algorithms to generate these matrices. In particular, the second one has a concise form, and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.

  14. Cotton-type and joint invariants for linear elliptic systems.

    PubMed

    Aslam, A; Mahomed, F M

    2013-01-01

    Cotton-type invariants for a subclass of a system of two linear elliptic equations, obtainable from a complex base linear elliptic equation, are derived both by splitting of the corresponding complex Cotton invariants of the base complex equation and from the Laplace-type invariants of the system of linear hyperbolic equations equivalent to the system of linear elliptic equations via linear complex transformations of the independent variables. It is shown that Cotton-type invariants derived from these two approaches are identical. Furthermore, Cotton-type and joint invariants for a general system of two linear elliptic equations are also obtained from the Laplace-type and joint invariants for a system of two linear hyperbolic equations equivalent to the system of linear elliptic equations by complex changes of the independent variables. Examples are presented to illustrate the results.

  15. Cotton-Type and Joint Invariants for Linear Elliptic Systems

    PubMed Central

    Aslam, A.; Mahomed, F. M.

    2013-01-01

    Cotton-type invariants for a subclass of a system of two linear elliptic equations, obtainable from a complex base linear elliptic equation, are derived both by splitting of the corresponding complex Cotton invariants of the base complex equation and from the Laplace-type invariants of the system of linear hyperbolic equations equivalent to the system of linear elliptic equations via linear complex transformations of the independent variables. It is shown that Cotton-type invariants derived from these two approaches are identical. Furthermore, Cotton-type and joint invariants for a general system of two linear elliptic equations are also obtained from the Laplace-type and joint invariants for a system of two linear hyperbolic equations equivalent to the system of linear elliptic equations by complex changes of the independent variables. Examples are presented to illustrate the results. PMID:24453871

  16. Rank-based estimation in the {ell}1-regularized partly linear model for censored outcomes with application to integrated analyses of clinical predictors and gene expression data.

    PubMed

    Johnson, Brent A

    2009-10-01

    We consider estimation and variable selection in the partial linear model for censored data. The partial linear model for censored data is a direct extension of the accelerated failure time model, the latter of which is a very important alternative model to the proportional hazards model. We extend rank-based lasso-type estimators to a model that may contain nonlinear effects. Variable selection in such a partial linear model has direct application to high-dimensional survival analyses that attempt to adjust for clinical predictors. In the microarray setting, previous methods can adjust for other clinical predictors by assuming that clinical and gene expression data enter the model linearly in the same fashion. Here, we select important variables after adjusting for prognostic clinical variables, while the clinical effects are allowed to be nonlinear. Our estimator is based on stratification and can be extended naturally to account for multiple nonlinear effects. We illustrate the utility of our method through simulation studies and application to the Wisconsin prognostic breast cancer data set.
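
    A loose sketch of the stratification idea for uncensored data (the paper's rank-based lasso for censored outcomes is considerably more involved; all variables and effect sizes here are synthetic and hypothetical): adjust the outcome for a nonlinear clinical effect within strata, then lasso-select gene features.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n, p = 300, 50
age = rng.uniform(40, 80, n)              # clinical covariate, nonlinear effect
genes = rng.normal(0, 1, (n, p))          # "gene expression" features
y = 5 * np.sin(age / 10) + 2 * genes[:, 0] - 1.5 * genes[:, 3] \
    + rng.normal(0, 0.5, n)               # only genes 0 and 3 matter

# stratify age into quintiles and remove stratum means
# (a crude nonlinear adjustment for the clinical effect)
bins = np.digitize(age, np.quantile(age, [0.2, 0.4, 0.6, 0.8]))
y_adj = y.copy()
for b in np.unique(bins):
    y_adj[bins == b] -= y[bins == b].mean()

# lasso on the adjusted outcome selects a sparse set of genes
coef = Lasso(alpha=0.1).fit(genes, y_adj).coef_
selected = np.flatnonzero(np.abs(coef) > 0.1)
```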

  17. GWM-a ground-water management process for the U.S. Geological Survey modular ground-water model (MODFLOW-2000)

    USGS Publications Warehouse

    Ahlfeld, David P.; Barlow, Paul M.; Mulligan, Anne E.

    2005-01-01

    GWM is a Ground-Water Management Process for the U.S. Geological Survey modular three-dimensional ground-water model, MODFLOW-2000. GWM uses a response-matrix approach to solve several types of linear, nonlinear, and mixed-binary linear ground-water management formulations. Each management formulation consists of a set of decision variables, an objective function, and a set of constraints. Three types of decision variables are supported by GWM: flow-rate decision variables, which are withdrawal or injection rates at well sites; external decision variables, which are sources or sinks of water that are external to the flow model and do not directly affect the state variables of the simulated ground-water system (heads, streamflows, and so forth); and binary variables, which have values of 0 or 1 and are used to define the status of flow-rate or external decision variables. Flow-rate decision variables can represent wells that extend over one or more model cells and be active during one or more model stress periods; external variables also can be active during one or more stress periods. A single objective function is supported by GWM, which can be specified to either minimize or maximize the weighted sum of the three types of decision variables. Four types of constraints can be specified in a GWM formulation: upper and lower bounds on the flow-rate and external decision variables; linear summations of the three types of decision variables; hydraulic-head based constraints, including drawdowns, head differences, and head gradients; and streamflow and streamflow-depletion constraints. The Response Matrix Solution (RMS) Package of GWM uses the Ground-Water Flow Process of MODFLOW to calculate the change in head at each constraint location that results from a perturbation of a flow-rate variable; these changes are used to calculate the response coefficients. For linear management formulations, the resulting matrix of response coefficients is then combined with other components of the linear management formulation to form a complete linear formulation; the formulation is then solved by use of the simplex algorithm, which is incorporated into the RMS Package. Nonlinear formulations arise for simulated conditions that include water-table (unconfined) aquifers or head-dependent boundary conditions (such as streams, drains, or evapotranspiration from the water table). Nonlinear formulations are solved by sequential linear programming; that is, repeated linearization of the nonlinear features of the management problem. In this approach, response coefficients are recalculated for each iteration of the solution process. Mixed-binary linear (or mildly nonlinear) formulations are solved by use of the branch and bound algorithm, which is also incorporated into the RMS Package. Three sample problems are provided to demonstrate the use of GWM for typical ground-water flow management problems. These sample problems provide examples of how GWM input files are constructed to specify the decision variables, objective function, constraints, and solution process for a GWM run. The GWM Process runs with the MODFLOW-2000 Global and Ground-Water Flow Processes, but in its current form GWM cannot be used with the Observation, Sensitivity, Parameter-Estimation, or Ground-Water Transport Processes. The GWM Process is written with a modular structure so that new objective functions, constraint types, and solution algorithms can be added.
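
    For the linear case, the response-matrix formulation reduces to a standard linear program; a minimal sketch with made-up response coefficients (scipy's LP solver stands in for the simplex code in the RMS Package; all numbers are illustrative, not from GWM):

```python
import numpy as np
from scipy.optimize import linprog

# maximize total withdrawal at 3 wells subject to drawdown limits at
# 2 head-constraint locations. R is a hypothetical response matrix:
# drawdown per unit withdrawal, as GWM computes by perturbing each
# flow-rate decision variable.
R = np.array([[0.020, 0.010, 0.005],
              [0.008, 0.015, 0.020]])
max_drawdown = np.array([1.0, 1.2])     # hydraulic-head based constraints
bounds = [(0, 100)] * 3                 # well-capacity bounds per well

# linprog minimizes, so negate the objective to maximize total withdrawal
res = linprog(c=[-1, -1, -1], A_ub=R, b_ub=max_drawdown, bounds=bounds)
q_opt = res.x                           # optimal withdrawal rates
```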

  18. Free piston variable-stroke linear-alternator generator

    DOEpatents

    Haaland, Carsten M.

    1998-01-01

    A free-piston variable stroke linear-alternator AC power generator for a combustion engine. An alternator mechanism and oscillator system generates AC current. The oscillation system includes two oscillation devices each having a combustion cylinder and a flying turnbuckle. The flying turnbuckle moves in accordance with the oscillation device. The alternator system is a linear alternator coupled between the two oscillation devices by a slotted connecting rod.

  19. Relationship Between Motor Variability, Accuracy, and Ball Speed in the Tennis Serve

    PubMed Central

    Antúnez, Ruperto Menayo; Hernández, Francisco Javier Moreno; García, Juan Pedro Fuentes; Vaíllo, Raúl Reina; Arroyo, Jesús Sebastián Damas

    2012-01-01

    The main objective of this study was to analyze the motor variability in the performance of the tennis serve and its relationship to performance outcome. Seventeen male tennis players took part in the research, and they performed 20 serves. Linear and non-linear variability during the hand movement was measured by 3D Motion Tracking. Ball speed was recorded with a sports radar gun and the ball bounces were video recorded to calculate accuracy. The results showed a relationship between the amount of variability and its non-linear structure found in performance of movement and the outcome of the serve. The study also found that movement predictability correlates with performance. An increase in the amount of movement variability could affect the tennis serve performance in a negative way by reducing speed and accuracy of the ball. PMID:23486998

  20. Phase-Controlled Polarization Modulators

    NASA Technical Reports Server (NTRS)

    Chuss, D. T.; Wollack, E. J.; Novak, G.; Moseley, S. H.; Pisano, G.; Krejny, M.; U-Yen, K.

    2012-01-01

    We report technology development of millimeter/submillimeter polarization modulators that operate by introducing a variable, controlled phase delay between two orthogonal polarization states. The variable-delay polarization modulator (VPM) operates via the introduction of a variable phase delay between two linear orthogonal polarization states, resulting in a variable mapping of a single linear polarization into a combination of that Stokes parameter and circular (Stokes V) polarization. Characterization of a prototype VPM is presented at 350 and 3000 microns. We also describe a modulator in which a variable phase delay is introduced between right- and left-circular polarization states. In this architecture, linear polarization is fully modulated. Each of these devices consists of a polarization diplexer parallel to and in front of a movable mirror. Modulation involves sub-wavelength translations of the mirror that change the magnitude of the phase delay.
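
    The mapping between linear and circular polarization can be written as an idealized Mueller matrix (a sketch only; sign conventions and the non-idealities of real VPMs vary):

```python
import numpy as np

def vpm_mueller(delta):
    """Idealized VPM Mueller matrix: a phase delay `delta` between two
    orthogonal linear polarizations mixes Stokes U and V while leaving
    I and Q unchanged (sign conventions differ between authors)."""
    c, s = np.cos(delta), np.sin(delta)
    return np.array([[1, 0, 0,  0],
                     [0, 1, 0,  0],
                     [0, 0, c, -s],
                     [0, 0, s,  c]])

# pure linear polarization at 45 deg: S = (I, Q, U, V) = (1, 0, 1, 0)
S_in = np.array([1.0, 0.0, 1.0, 0.0])
S_half = vpm_mueller(np.pi / 2) @ S_in   # quarter-wave-like delay: U -> V
S_full = vpm_mueller(np.pi) @ S_in       # half-wave-like delay: U -> -U
```

    Sweeping `delta` (by sub-wavelength mirror translations) therefore modulates the single linear Stokes parameter into circular polarization and back, as described above.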

  1. Linear and angular control of circular walking in healthy older adults and subjects with cerebellar ataxia.

    PubMed

    Goodworth, Adam D; Paquette, Caroline; Jones, Geoffrey Melvill; Block, Edward W; Fletcher, William A; Hu, Bin; Horak, Fay B

    2012-05-01

    Linear and angular control of trunk and leg motion during curvilinear navigation was investigated in subjects with cerebellar ataxia and age-matched control subjects. Subjects walked with eyes open around a 1.2-m circle. The relationship of linear to angular motion was quantified by determining the ratios of trunk linear velocity to trunk angular velocity and foot linear position to foot angular position. Errors in walking radius (the ratio of linear to angular motion) also were quantified continuously during the circular walk. Relative variability of linear and angular measures was compared using coefficients of variation (CoV). Patterns of variability were compared using power spectral analysis for the trunk and auto-covariance analysis for the feet. Errors in radius were significantly increased in patients with cerebellar damage as compared to controls. Cerebellar subjects had significantly larger CoV of feet and trunk in angular, but not linear, motion. Control subjects also showed larger CoV in angular compared to linear motion of the feet and trunk. Angular and linear components of stepping differed in that angular, but not linear, foot placement had a negative correlation from one stride to the next. Thus, walking in a circle was associated with more, and a different type of, variability in angular compared to linear motion. Results are consistent with increased difficulty of, and role of the cerebellum in, control of angular trunk and foot motion for curvilinear locomotion.
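
    The relative-variability comparison uses the coefficient of variation; a tiny sketch with hypothetical stride-by-stride values (not the study's data):

```python
import numpy as np

def cov_percent(x):
    # coefficient of variation: relative variability (in percent),
    # as used here to compare linear and angular gait measures
    x = np.asarray(x, dtype=float)
    return 100 * x.std(ddof=1) / x.mean()

# hypothetical stride-by-stride measures for one subject
linear_pos = [1.18, 1.22, 1.20, 1.19, 1.21]      # foot linear position, m
angular_pos = [28.0, 33.5, 30.2, 26.1, 31.9]     # foot angular position, deg
```

    On values like these, the angular measure shows the larger CoV, matching the paper's finding that angular motion carries more relative variability than linear motion.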

  2. Instrumental Variable Analysis with a Nonlinear Exposure–Outcome Relationship

    PubMed Central

    Davies, Neil M.; Thompson, Simon G.

    2014-01-01

    Background: Instrumental variable methods can estimate the causal effect of an exposure on an outcome using observational data. Many instrumental variable methods assume that the exposure–outcome relation is linear, but in practice this assumption is often in doubt, or perhaps the shape of the relation is a target for investigation. We investigate this issue in the context of Mendelian randomization, the use of genetic variants as instrumental variables. Methods: Using simulations, we demonstrate the performance of a simple linear instrumental variable method when the true shape of the exposure–outcome relation is not linear. We also present a novel method for estimating the effect of the exposure on the outcome within strata of the exposure distribution. This enables the estimation of localized average causal effects within quantile groups of the exposure or as a continuous function of the exposure using a sliding window approach. Results: Our simulations suggest that linear instrumental variable estimates approximate a population-averaged causal effect. This is the average difference in the outcome if the exposure for every individual in the population is increased by a fixed amount. Estimates of localized average causal effects reveal the shape of the exposure–outcome relation for a variety of models. These methods are used to investigate the relations between body mass index and a range of cardiovascular risk factors. Conclusions: Nonlinear exposure–outcome relations should not be a barrier to instrumental variable analyses. When the exposure–outcome relation is not linear, either a population-averaged causal effect or the shape of the exposure–outcome relation can be estimated. PMID:25166881
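
    The basic linear instrumental-variable estimate (here the single-instrument Wald ratio, equivalent to 2SLS in this case) can be contrasted with the confounded naive regression on synthetic data (a sketch of the setup, not the paper's method or dataset):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20000
z = rng.binomial(2, 0.3, n).astype(float)   # instrument, e.g. allele count
u = rng.normal(0, 1, n)                     # unobserved confounder
x = 0.5 * z + u + rng.normal(0, 1, n)       # exposure
y = 0.3 * x + u + rng.normal(0, 1, n)       # outcome; true causal effect 0.3

def slope(a, b):
    # least-squares slope of b on a
    a = a - a.mean(); b = b - b.mean()
    return (a @ b) / (a @ a)

naive = slope(x, y)                 # biased upward by the confounder u
iv = slope(z, y) / slope(z, x)      # Wald ratio / 2SLS with one instrument
```

    The naive slope absorbs the confounding through u, while the instrumental-variable ratio recovers (approximately) the population-averaged causal effect of 0.3.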

  3. Free piston variable-stroke linear-alternator generator

    DOEpatents

    Haaland, C.M.

    1998-12-15

    A free-piston variable stroke linear-alternator AC power generator for a combustion engine is described. An alternator mechanism and oscillator system generates AC current. The oscillation system includes two oscillation devices each having a combustion cylinder and a flying turnbuckle. The flying turnbuckle moves in accordance with the oscillation device. The alternator system is a linear alternator coupled between the two oscillation devices by a slotted connecting rod. 8 figs.

  4. An Improved Search Approach for Solving Non-Convex Mixed-Integer Non Linear Programming Problems

    NASA Astrophysics Data System (ADS)

    Sitopu, Joni Wilson; Mawengkang, Herman; Syafitri Lubis, Riri

    2018-01-01

    The nonlinear mathematical programming problem addressed in this paper has a structure characterized by a subset of variables restricted to assume discrete values, which are linear and separable from the continuous variables. The strategy of releasing nonbasic variables from their bounds, combined with the “active constraint” method, has been developed. This strategy is used to force the appropriate non-integer basic variables to move to their neighbourhood integer points. Successful implementation of these algorithms was achieved on various test problems.

  5. Using directed information for influence discovery in interconnected dynamical systems

    NASA Astrophysics Data System (ADS)

    Rao, Arvind; Hero, Alfred O.; States, David J.; Engel, James Douglas

    2008-08-01

Structure discovery in non-linear dynamical systems is an important and challenging problem that arises in applications such as computational neuroscience, econometrics, and biological network discovery. Each of these systems has multiple interacting variables, and the key problem is the inference of the underlying structure of the system (which variables are connected to which others) from output observations (such as multiple time trajectories of the variables). Since such applications demand the inference of directed relationships among variables in these non-linear systems, current methods that assume a linear structure or yield undirected variable dependencies are insufficient. Hence, in this work, we present a methodology for structure discovery using an information-theoretic metric called directed time information (DTI). Using both synthetic dynamical systems and real biological datasets (kidney development and T-cell data), we demonstrate the utility of DTI for such problems.

  6. Development of a new linearly variable edge filter (LVEF)-based compact slit-less mini-spectrometer

    NASA Astrophysics Data System (ADS)

    Mahmoud, Khaled; Park, Seongchong; Lee, Dong-Hoon

    2018-02-01

This paper presents the development of a compact charge-coupled device (CCD) spectrometer. We describe the design, concept, and characterization of a VNIR linear variable edge filter (LVEF)-based mini-spectrometer. The new instrument operates in the 300 nm to 850 nm wavelength range and consists of a linear variable edge filter in front of a CCD array. Small size, light weight, and low cost are achieved because the linearly variable filter requires no moving parts for wavelength selection, unlike the scanning mechanisms of commercial spectrometers on the market. This overview discusses the characteristics of the main components and the main concept, together with its principal advantages and limitations. Experimental characteristics of the LVEFs are described. The mathematical approach for obtaining the position-dependent slit function of the prototype spectrometer, and its numerical deconvolution for spectrum reconstruction, is described. The performance of our prototype instrument is demonstrated by measuring the spectrum of a reference light source.
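A minimal numerical sketch of the kind of slit-function deconvolution mentioned here, assuming a Gaussian position-dependent slit and Tikhonov regularization; the slit model, widths, and regularization weight are illustrative assumptions, not the instrument's actual characteristics:

```python
import numpy as np

# Discretized forward model: each CCD pixel sees the true spectrum through a
# slit function centered at that pixel's wavelength, so measured = K @ true.
n = 200
wl = np.linspace(300.0, 850.0, n)               # wavelength grid, nm
width = 8.0 + 0.01 * (wl - 300.0)               # slit broadens along the filter
K = np.exp(-0.5 * ((wl[None, :] - wl[:, None]) / width[:, None]) ** 2)
K /= K.sum(axis=1, keepdims=True)               # each row integrates to 1

true = np.exp(-0.5 * ((wl - 550.0) / 15.0) ** 2)   # narrow test emission line
measured = K @ true                                # blurred spectrum on the CCD

# Tikhonov-regularized deconvolution: argmin ||K s - m||^2 + lam ||s||^2
lam = 1e-3
s = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ measured)
```

The recovered spectrum `s` is sharper than the raw CCD reading, with its peak restored near the true line center.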

  7. Causal relationship model between variables using linear regression to improve professional commitment of lecturer

    NASA Astrophysics Data System (ADS)

    Setyaningsih, S.

    2017-01-01

The main element in building a leading university is lecturer commitment in a professional manner. Commitment is measured through willpower, loyalty, pride, and integrity as a professional lecturer. A total of 135 of 337 university lecturers were sampled to collect data. Data were analyzed using validity and reliability tests and multiple linear regression. Many studies have found links involving the commitment of lecturers, but the underlying causal relationships are generally neglected. The results indicate that the professional commitment of lecturers is affected by the variables empowerment, academic culture, and trust. The relationship model between variables is composed of three substructures. The first substructure consists of the endogenous variable professional commitment, three exogenous variables (academic culture, empowerment, and trust), and the residual variable ɛy. The second substructure consists of one endogenous variable, trust, two exogenous variables (empowerment and academic culture), and the residual variable ɛ3. The third substructure consists of one endogenous variable, academic culture, one exogenous variable, empowerment, and the residual variable ɛ2. Multiple linear regression was used in the path model for each substructure. The results showed that the hypotheses were supported, and these findings provide empirical evidence that increasing these variables will increase the professional commitment of the lecturers.
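The three substructures can be sketched as three ordinary least-squares fits on simulated data; all coefficients, noise levels, and the data itself below are invented for illustration, not the study's results:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 337
emp = rng.normal(size=n)                                        # empowerment (exogenous)
cul = 0.6 * emp + rng.normal(scale=0.8, size=n)                 # substructure 3: academic culture
tru = 0.4 * emp + 0.3 * cul + rng.normal(scale=0.8, size=n)     # substructure 2: trust
com = 0.3 * emp + 0.2 * cul + 0.4 * tru + rng.normal(scale=0.8, size=n)  # substructure 1: commitment

def ols(y, *xs):
    """Multiple linear regression; returns [intercept, coefficients...]."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b3 = ols(cul, emp)             # culture on empowerment
b2 = ols(tru, emp, cul)        # trust on empowerment and culture
b1 = ols(com, emp, cul, tru)   # commitment on all three
```

Each regression corresponds to one substructure of the path model, with the fitted coefficients playing the role of path coefficients.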

  8. Sparse 4D TomoSAR imaging in the presence of non-linear deformation

    NASA Astrophysics Data System (ADS)

    Khwaja, Ahmed Shaharyar; ćetin, Müjdat

    2018-04-01

    In this paper, we present a sparse four-dimensional tomographic synthetic aperture radar (4D TomoSAR) imaging scheme that can estimate elevation and linear as well as non-linear seasonal deformation rates of scatterers using the interferometric phase. Unlike existing sparse processing techniques that use fixed dictionaries based on a linear deformation model, we use a variable dictionary for the non-linear deformation in the form of seasonal sinusoidal deformation, in addition to the fixed dictionary for the linear deformation. We estimate the amplitude of the sinusoidal deformation using an optimization method and create the variable dictionary using the estimated amplitude. We show preliminary results using simulated data that demonstrate the soundness of our proposed technique for sparse 4D TomoSAR imaging in the presence of non-linear deformation.

  9. Heart rate variability based on risk stratification for type 2 diabetes mellitus.

    PubMed

    Silva-E-Oliveira, Julia; Amélio, Pâmela Marina; Abranches, Isabela Lopes Laguardia; Damasceno, Dênis Derly; Furtado, Fabianne

    2017-01-01

To evaluate heart rate variability among adults with different risk levels for type 2 diabetes mellitus. The risk for type 2 diabetes mellitus was assessed in 130 participants (89 females) based on the Finnish Diabetes Risk Score questionnaire and was classified as low risk (n=26), slightly elevated risk (n=41), moderate risk (n=27), and high risk (n=32). To measure heart rate variability, a Polar S810i® heart-rate monitor was employed to obtain RR series for each individual, at rest, for 5 minutes, followed by analysis of linear and nonlinear indexes. The groups at higher risk of type 2 diabetes mellitus had significantly lower linear and nonlinear heart rate variability indexes. The individuals at high risk for type 2 diabetes mellitus have lower heart rate variability.

  10. [From clinical judgment to linear regression model.

    PubMed

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

When we think about mathematical models, such as the linear regression model, we assume these terms are used only by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful for predicting or showing the relationship between two or more variables, as long as the dependent variable is quantitative and normally distributed. Stated another way, regression is used to predict a measure based on knowledge of at least one other variable. Linear regression has as its first objective determining the slope or inclination of the regression line: Y = a + bX, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease in "Y" that occurs when the variable "X" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
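The quantities described here (slope, intercept, and coefficient of determination) can be computed directly from data; the five data points below are invented for illustration:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)   # slope (regression coefficient)
a = y.mean() - b * x.mean()                          # intercept: value of Y when X = 0
r2 = np.corrcoef(x, y)[0, 1] ** 2                    # coefficient of determination
```

Here the fitted line is Y ≈ 0.05 + 1.99 X, and R² close to 1 indicates that X explains nearly all of the variation in Y.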

  11. Linear solvation energy relationships: "rule of thumb" for estimation of variable values

    USGS Publications Warehouse

    Hickey, James P.; Passino-Reader, Dora R.

    1991-01-01

For the linear solvation energy relationship (LSER), values are listed for each of the variables (Vi/100, π*, βm, αm) for fundamental organic structures and functional groups. We give guidelines for quickly estimating LSER variable values for a vast array of possible organic compounds, such as those found in the environment. The difficulty of generating these variables has greatly discouraged the application of this quantitative structure-activity relationship (QSAR) method. This paper presents the first compilation of molecular functional group values together with a utilitarian set of LSER variable estimation rules. The availability of these variable values and rules should facilitate widespread application of LSER to hazard evaluation of environmental contaminants.

  12. Algebraic Functions of H-Functions with Specific Dependency Structure.

    DTIC Science & Technology

    1984-05-01

    a study of its characteristic function. Such analysis is reproduced in books by Springer (17), Anderson (23), Feller (34,35), Mood and Graybill (52...following linearity property for expectations of jointly distributed random variables is derived. r 1 Theorem 1.1: If X and Y are real random variables...appear in American Journal of Mathematical and Management Science. 13. Mathai, A.M., and R.K. Saxena, "On linear combinations of stochastic variables

  13. Semi-Supervised Sparse Representation Based Classification for Face Recognition With Insufficient Labeled Samples

    NASA Astrophysics Data System (ADS)

    Gao, Yuan; Ma, Jiayi; Yuille, Alan L.

    2017-05-01

This paper addresses the problem of face recognition when there are only a few, or even a single, labeled examples of the face that we wish to recognize. Moreover, these examples are typically corrupted by nuisance variables, both linear (i.e., additive nuisance variables such as bad lighting or wearing of glasses) and non-linear (i.e., non-additive pixel-wise nuisance variables such as expression changes). The small number of labeled examples means that it is hard to remove these nuisance variables between the training and testing faces to obtain good recognition performance. To address the problem we propose a method called Semi-Supervised Sparse Representation based Classification (S³RC). This is based on recent work on sparsity in which faces are represented in terms of two dictionaries: a gallery dictionary consisting of one or more examples of each person, and a variation dictionary representing linear nuisance variables (e.g., different lighting conditions, different glasses). The main idea is that (i) we use the variation dictionary to characterize the linear nuisance variables via the sparsity framework, and then (ii) prototype face images are estimated as a gallery dictionary via a Gaussian Mixture Model (GMM), with mixed labeled and unlabeled samples in a semi-supervised manner, to deal with the non-linear nuisance variations between labeled and unlabeled samples. We have conducted experiments with insufficient labeled samples, even when there is only a single labeled sample per person. Our results on the AR, Multi-PIE, CAS-PEAL, and LFW databases demonstrate that the proposed method delivers significantly improved performance over existing methods.

  14. Suppression of chaos at slow variables by rapidly mixing fast dynamics through linear energy-preserving coupling

    NASA Astrophysics Data System (ADS)

    Abramov, R. V.

    2011-12-01

Chaotic multiscale dynamical systems are common in many areas of science, one example being the interaction of the low-frequency dynamics in the atmosphere with the fast turbulent weather dynamics. One of the key questions about chaotic multiscale systems is how the fast dynamics affects chaos at the slow variables, and, therefore, impacts uncertainty and predictability of the slow dynamics. Here we demonstrate, both theoretically and through numerical simulations, that linear slow-fast coupling with the total energy conservation property promotes the suppression of chaos at the slow variables through rapid mixing at the fast variables. A suitable mathematical framework is developed, connecting the slow dynamics on the tangent subspaces to the infinite-time linear response of the mean state to a constant external forcing at the fast variables. Additionally, it is shown that the uncoupled dynamics for the slow variables may remain chaotic while the complete multiscale system loses chaos and becomes completely predictable at the slow variables through increasing chaos and turbulence at the fast variables. This result contradicts common intuition: naturally, one would expect that coupling a slow, weakly chaotic system to a much faster and more strongly chaotic system would generally increase chaos at the slow variables.

  15. Performance of sand and shredded rubber tire mixture as a natural base isolator for earthquake protection

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Srijit; Sengupta, Aniruddha; Reddy, G. R.

    2015-12-01

The performance of a well-designed layer of sand, and composites like a layer of sand mixed with shredded rubber tire (RSM), as low-cost base isolators is studied in shake table tests in the laboratory. The building foundation is modeled by a 200 mm by 200 mm and 40 mm thick rigid plexiglass block. The block is placed in the middle of a 1 m by 1 m tank filled with sand. The selected base isolator is placed between the block and the sand foundation. Accelerometers are placed on top of the footing and the foundation sand layer. The displacement of the footing is also measured by an LVDT. The whole setup is mounted on a shake table and subjected to sinusoidal motions of varying amplitude and frequency. Sand alone is found to be effective only at very high amplitudes (> 0.65 g) of motion. The performance of a composite consisting of sand and 50% shredded rubber tire placed under the footing is found to be the most promising low-cost, effective base isolator.

  16. A Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics. Part 1. Analysis Development

    DTIC Science & Technology

    1980-06-01

    sufficient. Dropping the time lag terms, the equations for Xu, Xx’, and X reduce to linear algebraic equations.Y Hence in the quasistatic case the...quasistatic variables now are not described by differential equations but rather by linear algebraic equations. The solution for x0 then is simply -365...matrices for two-bladed rotor 414 7. LINEAR SYSTEM ANALYSIS 425 7,1 State Variable Form 425 7.2 Constant Coefficient System 426 7.2. 1 Eigen-analysis 426

  17. The Multifaceted Variable Approach: Selection of Method in Solving Simple Linear Equations

    ERIC Educational Resources Information Center

    Tahir, Salma; Cavanagh, Michael

    2010-01-01

    This paper presents a comparison of the solution strategies used by two groups of Year 8 students as they solved linear equations. The experimental group studied algebra following a multifaceted variable approach, while the comparison group used a traditional approach. Students in the experimental group employed different solution strategies,…

  18. Stability of Nonlinear Principal Components Analysis: An Empirical Study Using the Balanced Bootstrap

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Kooij, Anita J.

    2007-01-01

    Principal components analysis (PCA) is used to explore the structure of data sets containing linearly related numeric variables. Alternatively, nonlinear PCA can handle possibly nonlinearly related numeric as well as nonnumeric variables. For linear PCA, the stability of its solution can be established under the assumption of multivariate…

  19. Conjoint Analysis: A Study of the Effects of Using Person Variables.

    ERIC Educational Resources Information Center

    Fraas, John W.; Newman, Isadore

    Three statistical techniques--conjoint analysis, a multiple linear regression model, and a multiple linear regression model with a surrogate person variable--were used to estimate the relative importance of five university attributes for students in the process of selecting a college. The five attributes include: availability and variety of…

  20. Graphical Description of Johnson-Neyman Outcomes for Linear and Quadratic Regression Surfaces.

    ERIC Educational Resources Information Center

    Schafer, William D.; Wang, Yuh-Yin

    A modification of the usual graphical representation of heterogeneous regressions is described that can aid in interpreting significant regions for linear or quadratic surfaces. The standard Johnson-Neyman graph is a bivariate plot with the criterion variable on the ordinate and the predictor variable on the abscissa. Regression surfaces are drawn…

  1. Quantum error correction of continuous-variable states against Gaussian noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ralph, T. C.

    2011-08-15

    We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.

  2. Common pitfalls in statistical analysis: Linear regression analysis

    PubMed Central

    Aggarwal, Rakesh; Ranganathan, Priya

    2017-01-01

    In a previous article in this series, we explained correlation analysis which describes the strength of relationship between two continuous variables. In this article, we deal with linear regression analysis which predicts the value of one continuous variable from another. We also discuss the assumptions and pitfalls associated with this analysis. PMID:28447022

  3. A new chaotic oscillator with free control

    NASA Astrophysics Data System (ADS)

    Li, Chunbiao; Sprott, Julien Clinton; Akgul, Akif; Iu, Herbert H. C.; Zhao, Yibo

    2017-08-01

    A novel chaotic system is explored in which all terms are quadratic except for a linear function. The slope of the linear function rescales the amplitude and frequency of the variables linearly while its zero intercept allows offset boosting for one of the variables. Therefore, a free-controlled chaotic oscillation can be obtained with any desired amplitude, frequency, and offset by an easy modification of the linear function. When implemented as an electronic circuit, the corresponding chaotic signal can be controlled by two independent potentiometers, which is convenient for constructing a chaos-based application system. To the best of our knowledge, this class of chaotic oscillators has never been reported.

  4. Variance approach for multi-objective linear programming with fuzzy random of objective function coefficients

    NASA Astrophysics Data System (ADS)

    Indarsih, Indrati, Ch. Rini

    2016-02-01

In this paper, we define the variance of fuzzy random variables through alpha levels. We present a theorem establishing that the variance of a fuzzy random variable is itself a fuzzy number. We consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients and solve it by a variance approach. The approach transforms the MOLP with fuzzy random objective function coefficients into an MOLP with fuzzy objective function coefficients. By weighting methods, we obtain a linear program with fuzzy coefficients, which we solve by the simplex method for fuzzy linear programming.

  5. Estimating linear effects in ANOVA designs: the easy way.

    PubMed

    Pinhas, Michal; Tzelgov, Joseph; Ganor-Stern, Dana

    2012-09-01

    Research in cognitive science has documented numerous phenomena that are approximated by linear relationships. In the domain of numerical cognition, the use of linear regression for estimating linear effects (e.g., distance and SNARC effects) became common following Fias, Brysbaert, Geypens, and d'Ydewalle's (1996) study on the SNARC effect. While their work has become the model for analyzing linear effects in the field, it requires statistical analysis of individual participants and does not provide measures of the proportions of variability accounted for (cf. Lorch & Myers, 1990). In the present methodological note, using both the distance and SNARC effects as examples, we demonstrate how linear effects can be estimated in a simple way within the framework of repeated measures analysis of variance. This method allows for estimating effect sizes in terms of both slope and proportions of variability accounted for. Finally, we show that our method can easily be extended to estimate linear interaction effects, not just linear effects calculated as main effects.
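A sketch of estimating a linear effect via per-participant contrast slopes, in the spirit of the repeated measures approach described above; the reaction-time data, number of participants, and effect size are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj = 20
conditions = np.array([1.0, 2.0, 3.0, 4.0])   # e.g. numerical distance levels

# Hypothetical reaction times: a true linear decrease of 25 ms per distance unit,
# plus per-cell noise, for 20 participants x 4 conditions.
rt = 600.0 - 25.0 * conditions + rng.normal(scale=20.0, size=(n_subj, 4))

# Centered linear contrast weights give each participant's slope estimate.
w = conditions - conditions.mean()
slopes = (rt * w).sum(axis=1) / (w ** 2).sum()

# The linear effect is then a one-sample t test of the slopes against zero.
t = slopes.mean() / (slopes.std(ddof=1) / np.sqrt(n_subj))
```

The mean of `slopes` estimates the linear effect in original units (ms per distance unit), and the variability of the per-participant slopes supports effect-size measures such as the proportion of variability accounted for.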

  6. Estimating PM2.5 Concentrations in Xi'an City Using a Generalized Additive Model with Multi-Source Monitoring Data

    PubMed Central

    Song, Yong-Ze; Yang, Hong-Lei; Peng, Jun-Huan; Song, Yi-Rong; Sun, Qian; Li, Yuan

    2015-01-01

Particulate matter with an aerodynamic diameter <2.5 μm (PM2.5) represents a severe environmental problem and has a negative impact on human health. Xi'an City, with a population of 6.5 million, has among the highest PM2.5 concentrations in China. In 2013, there were in total 191 days in Xi'an City on which PM2.5 concentrations were greater than 100 μg/m3. Recently, a few studies have explored the potential causes of high PM2.5 concentrations using remote sensing data such as the MODIS aerosol optical thickness (AOT) product. Linear regression is a commonly used method to find statistical relationships among PM2.5 concentrations and other pollutants, including CO, NO2, SO2, and O3, which can be indicative of emission sources. The relationships among these variables, however, are usually complicated and non-linear. Therefore, a generalized additive model (GAM) is used to estimate the statistical relationships between potential variables and PM2.5 concentrations. This model contains linear functions of SO2 and CO, univariate smoothing non-linear functions of NO2, O3, AOT, and temperature, and bivariate smoothing non-linear functions of location and wind variables. The model can explain 69.50% of PM2.5 concentrations, with R2 = 0.691, which improves on the result of a stepwise linear regression (R2 = 0.582) by 18.73%. The two most significant variables, CO concentration and AOT, represent 20.65% and 19.54% of the deviance, respectively, while the three other gas-phase concentrations, SO2, NO2, and O3, account for 10.88% of the total deviance. These results show that in Xi'an City, traffic and other industrial emissions are the primary sources of PM2.5. Temperature, location, and wind variables are also non-linearly related to PM2.5. PMID:26540446

  7. Linear Modeling and Evaluation of Controls on Flow Response in Western Post-Fire Watersheds

    NASA Astrophysics Data System (ADS)

    Saxe, S.; Hogue, T. S.; Hay, L.

    2015-12-01

This research investigates the impact of wildfires on watershed flow regimes throughout the western United States, specifically focusing on evaluation of fire events within specified subregions and determination of the impact of climate and geophysical variables on post-fire flow response. Fire events were collected through federal and state-level databases, and streamflow data were collected from U.S. Geological Survey stream gages. 263 watersheds were identified with at least 10 years of continuous pre-fire daily streamflow records and 5 years of continuous post-fire daily flow records. For each watershed, percent changes from pre- to post-fire were calculated in runoff ratio (RO), annual seven-day low flows (7Q2), and annual seven-day high flows (7Q10). Numerous independent variables were identified for each watershed and fire event, including topographic, land cover, climate, burn severity, and soils data. The national set of watersheds was divided into five regions through K-clustering, and a lasso linear regression model, applying the leave-one-out calibration method, was calculated for each region. Nash-Sutcliffe efficiency (NSE) was used to determine the accuracy of the resulting models. The regions encompassing the United States along and west of the Rocky Mountains, excluding the coastal watersheds, produced the most accurate linear models. The Pacific coast region models produced poor and inconsistent results, indicating that the region needs to be further subdivided. At present, the runoff-ratio and high-flow response variables appear to be more easily modeled than the low flows. Results of linear regression modeling showed varying importance of watershed and fire-event variables, with conflicting correlations between land cover types and soil types by region. The addition of further independent variables and restriction of current variables based on correlation indicators is ongoing and should allow for more accurate linear regression modeling.
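Lasso variable selection of the kind used in this study can be sketched with a small coordinate-descent implementation; the predictors, coefficients, and data below are hypothetical, and the study's leave-one-out calibration and NSE scoring are omitted:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent with soft thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]        # partial residual for feature j
            rho = X[:, j] @ r / n
            z = (X[:, j] ** 2).sum() / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta

rng = np.random.default_rng(3)
n, p = 100, 6
X = rng.normal(size=(n, p))               # hypothetical watershed/fire predictors
X = (X - X.mean(0)) / X.std(0)            # standardize (e.g. burn severity, slope, ...)
# Only the first two predictors truly drive the flow response here.
y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)

beta = lasso_cd(X, y - y.mean(), lam=0.15)
```

The L1 penalty drives the coefficients of the irrelevant predictors to zero, which is what makes lasso useful for judging which watershed and fire-event variables matter.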

  8. Kinetics of human aging: I. Rates of senescence between ages 30 and 70 years in healthy people.

    PubMed

    Sehl, M E; Yates, F E

    2001-05-01

A calculation of loss rates is reported for human structural and functional variables from a substantially larger data set than has previously been studied. Data were collected for healthy, nonsmoking human subjects of both sexes from a literature search of cross-sectional, longitudinal, and cross-sequential studies. The number of studies analyzed was 469, and the total number of subjects was 54,274. A linear model provided a fit of the data, for each variable, that was not significantly different from the best polynomial fit. Therefore, linear loss rates (as a percent decline per year from the reference value at age 30) were calculated for 445 variables from 13 organ systems, and additionally for 24 still more integrative variables, such as maximum oxygen consumption and exercise performance, that express the effects of multiple contributing variables and systems. The frequency distribution of the 13 individual system linear loss rates (as percent loss per year) for a very healthy population has a roughly unimodal, right-skewed shape, with mean 0.65, median 0.5, and variance 0.32. (The actual underlying distribution could be a truncated Gaussian, an exponential, a Poisson, a gamma, or some other form.) The linear estimates of loss rates were clustered between 0% and 2% per year for variables from most organ systems, the exceptions being the endocrine, thermoregulatory, and gastrointestinal systems, for which wider ranges (up to approximately 3% per year) of loss rates were found. We suggest that this set of linear losses over time, observed in healthy individuals between ages of approximately 30 to 70 years, exposes the underlying kinetics of human senescence, independent of the effects of substantial disease.

  9. State variable modeling of the integrated engine and aircraft dynamics

    NASA Astrophysics Data System (ADS)

    Rotaru, Constantin; Sprinţu, Iuliana

    2014-12-01

This study explores the dynamic characteristics of the combined aircraft-engine system, based on the general theory of state variables for linear and nonlinear systems. It leads first to the separate formulation of the longitudinal and lateral-directional state variable models, followed by the merging of the aircraft and engine models into a single state variable model. The linearized equations are expressed in matrix form, and the engine dynamics are included in terms of the variation of thrust following a deflection of the throttle. The linear model of the shaft dynamics for a two-spool jet engine is derived by extending the one-spool model. The results include a discussion of the thrust effect upon the aircraft response when the thrust force has a sizable moment arm with respect to the aircraft center of gravity, creating a compensating moment.
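The merging of airframe and engine dynamics into one state variable model can be sketched with a toy example, appending a first-order thrust lag to a two-state airframe model; the matrices and time constant below are invented for illustration, not the paper's:

```python
import numpy as np

# Hypothetical short-period airframe model, states [alpha, q] (stable by design).
A_ac = np.array([[-1.0,  1.0],
                 [-4.0, -1.4]])
B_thrust = np.array([[0.0], [0.3]])   # thrust moment enters the pitch-rate equation
tau = 0.8                             # engine (thrust) time constant

# Augmented system x = [alpha, q, thrust], with throttle as the single input:
#   thrust' = (-thrust + throttle) / tau
A = np.block([[A_ac,              B_thrust],
              [np.zeros((1, 2)),  np.array([[-1.0 / tau]])]])
B = np.array([[0.0], [0.0], [1.0 / tau]])

eig = np.linalg.eigvals(A)   # eigenvalues of the combined aircraft-engine model
```

Because the augmented matrix is block triangular, the combined model's modes are simply the airframe modes plus the engine lag pole, which makes the effect of the engine dynamics on the aircraft response easy to trace.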

  10. [Relations between biomedical variables: mathematical analysis or linear algebra?].

    PubMed

    Hucher, M; Berlie, J; Brunet, M

    1977-01-01

The authors, after briefly reviewing the structure of a model, emphasize two possible approaches to the relations linking the variables of such a model: the use of functions, which falls within mathematical analysis, and the use of linear algebra, which benefits from the development and automation of matrix computation. They specify the respective advantages of these methods, their limits, and the requirements for their use, according to the kind of variables and data and the objective of the work, whether understanding phenomena or supporting decisions.

  11. [Variable selection methods combined with local linear embedding theory used for optimization of near infrared spectral quantitative models].

    PubMed

    Hao, Yong; Sun, Xu-Dong; Yang, Qiang

    2012-12-01

A variable selection strategy combined with local linear embedding (LLE) was introduced for the analysis of complex samples using near infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE combined with SPA, were used for eliminating redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used for modeling complex samples. The results show that MCUVE can both extract informative variables and improve the precision of models. Compared with PLSR models, LLE-PLSR models achieve more accurate analysis results. MCUVE combined with LLE-PLSR is an effective modeling method for NIRS quantitative analysis.

  12. Non-Linear Relationship between Economic Growth and CO2 Emissions in China: An Empirical Study Based on Panel Smooth Transition Regression Models

    PubMed Central

    Wang, Zheng-Xin; Hao, Peng; Yao, Pei-Yi

    2017-01-01

    The non-linear relationship between provincial economic growth and carbon emissions is investigated by using panel smooth transition regression (PSTR) models. The research indicates that, on the condition of separately taking Gross Domestic Product per capita (GDPpc), energy structure (Es), and urbanisation level (Ul) as transition variables, three models all reject the null hypothesis of a linear relationship, i.e., a non-linear relationship exists. The results show that the three models all contain only one transition function but different numbers of location parameters. The model taking GDPpc as the transition variable has two location parameters, while the other two models separately considering Es and Ul as the transition variables both contain one location parameter. The three models applied in the study all favourably describe the non-linear relationship between economic growth and CO2 emissions in China. It also can be seen that the conversion rate of the influence of Ul on per capita CO2 emissions is significantly higher than those of GDPpc and Es on per capita CO2 emissions. PMID:29236083
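The logistic transition function that underlies PSTR models, with m location parameters, can be written down directly; the slope and location values below are illustrative choices, not the study's estimates:

```python
import numpy as np

def transition(q, gamma, c):
    """PSTR logistic transition g(q; gamma, c) with location parameters c."""
    c = np.atleast_1d(c)
    prod = np.prod(q[:, None] - c[None, :], axis=1)
    return 1.0 / (1.0 + np.exp(-gamma * prod))

q = np.linspace(-3.0, 3.0, 601)                # transition variable (standardized)
g1 = transition(q, gamma=2.0, c=[0.0])         # one location parameter: monotone switch
g2 = transition(q, gamma=2.0, c=[-1.0, 1.0])   # two: outer regime returns at both extremes

# Regime-dependent coefficient on the regressor: beta0 + beta1 * g(q)
beta_q = 0.3 + 0.5 * g1
```

With one location parameter the coefficient moves smoothly from one regime to the other as the transition variable grows; with two location parameters the middle range of the transition variable behaves differently from both extremes, which matches the "different numbers of location parameters" found for the three transition variables.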

  13. Element enrichment factor calculation using grain-size distribution and functional data regression.

    PubMed

    Sierra, C; Ordóñez, C; Saavedra, A; Gallego, J R

    2015-01-01

    In environmental geochemistry studies it is common practice to normalize element concentrations in order to remove the effect of grain size. Linear regression with respect to a particular grain size or conservative element is a widely used method of normalization. In this paper, the utility of functional linear regression, in which the grain-size curve is the independent variable and the pollutant concentration the dependent variable, is analyzed and applied to detrital sediment. After implementing functional linear regression and classical linear regression models to normalize and calculate enrichment factors, we concluded that the former technique has some advantages over the latter. First, functional linear regression directly considers the grain-size distribution of the samples as the explanatory variable. Second, because the regression coefficients are not constant values but functions of grain size, it is easier to comprehend the relationship between grain size and pollutant concentration. Third, regularization can be introduced into the model in order to strike a balance between fidelity to the data and smoothness of the solutions. Copyright © 2014 Elsevier Ltd. All rights reserved.
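
    The functional-regression recipe above (a whole grain-size curve as predictor, with a roughness penalty on the coefficient function) can be sketched on a discretized grid. The curves, penalty weight, and grid below are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each row of X is a discretized grain-size curve on a common grid of m sizes.
n, m = 80, 50
s = np.linspace(0, 1, m)
X = np.cumsum(rng.random((n, m)), axis=1)   # monotone synthetic "curves"
X /= X[:, -1:]                              # normalise each curve to end at 1

beta_true = np.sin(2 * np.pi * s)           # smooth coefficient function
y = X @ beta_true / m + rng.normal(scale=0.01, size=n)

# Regularised functional regression: a second-difference roughness penalty
# on beta trades data fit against smoothness of the coefficient function.
D = np.diff(np.eye(m), n=2, axis=0)         # (m-2, m) second-difference operator
lam = 1e-3
beta_hat = np.linalg.solve(X.T @ X / m**2 + lam * D.T @ D, X.T @ y / m)
```

Increasing `lam` yields a smoother, more stable coefficient function at the cost of fit, which is exactly the reliability/smoothness equilibrium the abstract mentions.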

  14. Non-Linear Relationship between Economic Growth and CO₂ Emissions in China: An Empirical Study Based on Panel Smooth Transition Regression Models.

    PubMed

    Wang, Zheng-Xin; Hao, Peng; Yao, Pei-Yi

    2017-12-13

    The non-linear relationship between provincial economic growth and carbon emissions is investigated using panel smooth transition regression (PSTR) models. The research indicates that, when Gross Domestic Product per capita (GDPpc), energy structure (Es), and urbanisation level (Ul) are separately taken as transition variables, all three models reject the null hypothesis of a linear relationship, i.e., a non-linear relationship exists. The results show that the three models each contain only one transition function but different numbers of location parameters. The model taking GDPpc as the transition variable has two location parameters, while the other two models, separately considering Es and Ul as the transition variables, each contain one location parameter. All three models favourably describe the non-linear relationship between economic growth and CO₂ emissions in China. The conversion rate of the influence of Ul on per capita CO₂ emissions is significantly higher than those of GDPpc and Es.

  15. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
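
    The continuity side conditions described above are what the truncated power basis encodes: the (t − k)₊ terms are zero below each knot, so the polynomial segments join automatically. A minimal fixed-effects sketch with one knot (not the authors' SAS/S-plus implementation, and without the random-effects part):

```python
import numpy as np

def tpb(t, knots, degree=1):
    """Truncated power basis for a piecewise polynomial of given degree:
    columns 1, t, ..., t^degree, then (t - k)_+^degree for each knot k.
    The truncated terms vanish below the knot, which is exactly the side
    condition that makes the segments join continuously."""
    t = np.asarray(t, dtype=float)
    cols = [t**d for d in range(degree + 1)]
    cols += [np.clip(t - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

t = np.linspace(0, 10, 201)
B = tpb(t, knots=[4.0], degree=1)      # piecewise linear, one knot at t = 4

# Fit a segmented trend: slope 1 before the knot, slope -0.5 after it.
y = np.where(t < 4, t, 4 - 0.5 * (t - 4))
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
fit = B @ coef
```

The fitted coefficient on the truncated term is the change in slope at the knot (here −1.5), which is a convenient interpretation for longitudinal trends such as viral load trajectories.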

  16. Non-Linear Approach in Kinesiology Should Be Preferred to the Linear--A Case of Basketball.

    PubMed

    Trninić, Marko; Jeličić, Mario; Papić, Vladan

    2015-07-01

    In kinesiology, medicine, biology and psychology, where research focuses on dynamical self-organized systems, complex connections exist between variables. The non-linear nature of complex systems is discussed and illustrated using non-linear anthropometric predictors of performance in basketball. Previous studies interpreted relations between anthropometric features and measures of effectiveness in basketball by (a) using linear correlation models, and (b) including all basketball athletes in the same sample of participants regardless of playing position. In this paper the significance and character of linear and non-linear relations between simple anthropometric predictors (AP) and performance criteria consisting of situation-related measures of effectiveness (SE) in basketball were determined and evaluated. The sample of participants consisted of top-level junior basketball players divided into three groups according to their playing time (8 minutes or more per game) and playing position: guards (N = 42), forwards (N = 26) and centers (N = 40). Linear and non-linear (general model) regressions were calculated simultaneously and separately for each group. The conclusion is clear: non-linear regressions are frequently superior to linear correlations when interpreting the actual associations among research variables.

  17. A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.

    2012-01-01

    A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…

  18. Missing Data Treatments at the Second Level of Hierarchical Linear Models

    ERIC Educational Resources Information Center

    St. Clair, Suzanne W.

    2011-01-01

    The current study evaluated the performance of traditional versus modern MDTs in the estimation of fixed-effects and variance components for data missing at the second level of an hierarchical linear model (HLM) model across 24 different study conditions. Variables manipulated in the analysis included, (a) number of Level-2 variables with missing…

  19. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    ERIC Educational Resources Information Center

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  20. Linear variability of gait according to socioeconomic status in elderly

    PubMed Central

    2016-01-01

    Aim: To evaluate the linear variability of comfortable gait according to socioeconomic status in community-dwelling elderly. Method: For this cross-sectional observational study, 63 self-functioning elderly were categorized by socioeconomic level into medium-low (n = 33, age 69.0 ± 5.0 years) and medium-high (n = 30, age 71.0 ± 6.0 years) groups. Each participant walked at a comfortable speed for 3 min on a 40-meter elliptical circuit; five strides were recorded on video and converted into frames to determine the minimum foot clearance, maximum foot clearance and stride length. The intra-group linear variability was calculated as the coefficient of variation in percent. Results: The variability of the trajectory parameters did not differ by socioeconomic status, at 30% (range = 15-55%) for the minimum foot clearance and 6% (range = 3-8%) for the maximum foot clearance. Stride length, however, was consistently more variable in the medium-low socioeconomic group for the overall sample (p = 0.004), females (p = 0.041) and males (p = 0.007), with values near 4% (range = 2.5-5.0%) in the medium-low and 2% (range = 1.5-3.5%) in the medium-high group. Conclusions: The intra-group linear variability of stride length during comfortable gait is consistently higher, yet within reference parameters, for elderly of medium-low socioeconomic status. This might indicate greater complexity and consequent motor adaptability. PMID:27546931
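
    The variability measure used here, the coefficient of variation in percent, is straightforward to compute; the stride-length values below are hypothetical, purely to show the calculation:

```python
import numpy as np

def cv_percent(x):
    """Intra-group linear variability: coefficient of variation in percent
    (sample standard deviation over the mean, times 100)."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# Hypothetical stride lengths (m) for the five recorded strides of one subject.
stride = np.array([1.18, 1.22, 1.20, 1.25, 1.15])
cv = cv_percent(stride)   # roughly 3%, within the ranges reported above
```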

  1. Nonlinear Dynamic Models in Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2002-01-01

    To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
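
    A two-state non-linear model of the kind described can be simulated directly. The van der Pol oscillator below is a generic stand-in (not an ALS subsystem) whose stable limit cycle is behavior a linear two-state model cannot produce: a linear system can only decay, grow, or oscillate at an amplitude fixed by its initial conditions.

```python
import numpy as np

# Two-state non-linear system: x' = y, y' = mu*(1 - x^2)*y - x.
# Starting near the origin, the trajectory grows and then settles onto a
# stable limit cycle of amplitude ~2, independent of the initial condition.
mu, dt, steps = 1.0, 0.01, 20000
x, y = 0.1, 0.0
traj = np.empty((steps, 2))
for i in range(steps):
    traj[i] = (x, y)
    x, y = x + dt * y, y + dt * (mu * (1 - x**2) * y - x)  # explicit Euler

amplitude = np.abs(traj[steps // 2:, 0]).max()   # late-time amplitude
```

This is the kind of simulation the abstract argues is necessary: the limit-cycle amplitude is a property of the non-linear dynamics, discoverable only by integrating the model.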

  2. Semilinear programming: applications and implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohan, S.

    Semilinear programming is a method of solving optimization problems with linear constraints in which the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently, without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization, such as production smoothing, facility location, goal programming and L1 estimation, are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code, SLP, is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP and, as equivalent linear programs, using a linear programming simplex code. Comparisons of CPU times and numbers of iterations show SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and, as equivalent standard linear programs, using the simple upper-bounded linear programming code SUBLP.
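
    L1 estimation, one of the applications listed above, shows the semilinear structure: each residual splits into positive and negative parts that are costed separately (equally, for L1). The thesis solves this with a modified simplex method; lacking an LP solver here, a common solver-free stand-in is iteratively reweighted least squares on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic regression with heavy-tailed noise, where the L1 (least absolute
# deviations) fit is much more robust than ordinary least squares.
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 3.0]) + rng.standard_t(df=2, size=n)

# Iteratively reweighted least squares: weight each point by 1/|residual|,
# so the weighted L2 problem approximates the L1 objective.
b = np.linalg.lstsq(X, y, rcond=None)[0]          # L2 starting point
for _ in range(50):
    w = 1.0 / np.maximum(np.abs(y - X @ b), 1e-8)
    WX = X * w[:, None]
    b = np.linalg.solve(X.T @ WX, WX.T @ y)       # weighted normal equations
```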

  3. A Method for Modeling the Intrinsic Dynamics of Intraindividual Variability: Recovering the Parameters of Simulated Oscillators in Multi-Wave Panel Data.

    ERIC Educational Resources Information Center

    Boker, Steven M.; Nesselroade, John R.

    2002-01-01

    Examined two methods for fitting models of intrinsic dynamics to intraindividual variability data by testing these techniques' behavior in equations through simulation studies. Among the main results is the demonstration that a local linear approximation of derivatives can accurately recover the parameters of a simulated linear oscillator, with…

  4. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.

    ERIC Educational Resources Information Center

    Vidal, Sherry

    Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…

  5. An Exploration of a Quantitative Reasoning Instructional Approach to Linear Equations in Two Variables with Community College Students

    ERIC Educational Resources Information Center

    Belue, Paul T.; Cavey, Laurie Overman; Kinzel, Margaret T.

    2017-01-01

    In this exploratory study, we examined the effects of a quantitative reasoning instructional approach to linear equations in two variables on community college students' conceptual understanding, procedural fluency, and reasoning ability. This was done in comparison to the use of a traditional procedural approach for instruction on the same topic.…

  6. Social inequality, lifestyles and health - a non-linear canonical correlation analysis based on the approach of Pierre Bourdieu.

    PubMed

    Grosse Frie, Kirstin; Janssen, Christian

    2009-01-01

    Based on the theoretical and empirical approach of Pierre Bourdieu, a multivariate non-linear method is introduced as an alternative way to analyse the complex relationships between social determinants and health. The analysis is based on face-to-face interviews with 695 randomly selected respondents aged 30 to 59. Variables regarding socio-economic status, life circumstances, lifestyles, health-related behaviour and health were chosen for the analysis. In order to determine whether the respondents can be differentiated and described based on these variables, a non-linear canonical correlation analysis (OVERALS) was performed. The results can be described on three dimensions; the Eigenvalues sum to a fit of 1.444, which can be interpreted as approximately 50% of explained variance. The three-dimensional space illustrates correspondences between variables and provides a framework for interpretation based on latent dimensions, which can be described by age, education, income and gender. Using non-linear canonical correlation analysis, health characteristics can be analysed in conjunction with socio-economic conditions and lifestyles. Based on Bourdieu's theoretical approach, the complex correlations between these variables can be more substantially interpreted and presented.

  7. Variable Selection with Prior Information for Generalized Linear Models via the Prior LASSO Method.

    PubMed

    Jiang, Yuan; He, Yunxiao; Zhang, Heping

    LASSO is a popular statistical tool often used in conjunction with generalized linear models that can simultaneously select variables and estimate parameters. When there are many variables of interest, as in current biological and biomedical studies, the power of LASSO can be limited. Fortunately, so much biological and biomedical data have been collected and they may contain useful information about the importance of certain variables. This paper proposes an extension of LASSO, namely, prior LASSO (pLASSO), to incorporate that prior information into penalized generalized linear models. The goal is achieved by adding in the LASSO criterion function an additional measure of the discrepancy between the prior information and the model. For linear regression, the whole solution path of the pLASSO estimator can be found with a procedure similar to the Least Angle Regression (LARS). Asymptotic theories and simulation results show that pLASSO provides significant improvement over LASSO when the prior information is relatively accurate. When the prior information is less reliable, pLASSO shows great robustness to the misspecification. We illustrate the application of pLASSO using a real data set from a genome-wide association study.
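
    The pLASSO criterion, the LASSO objective plus a measure of discrepancy from prior information, can be sketched with proximal gradient (ISTA) updates. The quadratic discrepancy form, the data, and the prior values below are illustrative assumptions, not the paper's exact formulation; setting `eta = 0` recovers plain LASSO:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic sparse regression: only the first two coefficients are nonzero.
n, p = 100, 10
X = rng.normal(size=(n, p))
b_true = np.array([3.0, -2.0] + [0.0] * (p - 2))
y = X @ b_true + rng.normal(scale=0.5, size=n)

b_prior = np.array([2.5, -1.5] + [0.0] * (p - 2))   # hypothetical prior info
lam, eta = 0.1, 0.5                                  # L1 and prior weights
step = 1.0 / (np.linalg.eigvalsh(X.T @ X / n).max() + eta)

# ISTA: gradient step on the smooth part (squared loss + prior discrepancy),
# then soft-thresholding for the L1 penalty.
b = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ b - y) / n + eta * (b - b_prior)
    z = b - step * grad
    b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
```

With an accurate prior the estimates are pulled toward the prior values while the L1 term keeps the irrelevant coefficients at zero, which mirrors the robustness behavior reported in the abstract.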

  8. Introducing Linear Functions: An Alternative Statistical Approach

    ERIC Educational Resources Information Center

    Nolan, Caroline; Herbert, Sandra

    2015-01-01

    The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be "threshold concepts". There is recognition that linear functions can be taught in context through the exploration of linear…

  9. Variability simulations with a steady, linearized primitive equations model

    NASA Technical Reports Server (NTRS)

    Kinter, J. L., III; Nigam, S.

    1985-01-01

    Solutions of the steady, primitive equations on a sphere, linearized about a zonally symmetric basic state, are computed for the purpose of simulating monthly mean variability in the troposphere. The basic states are observed, winter monthly mean, zonal means of zonal and meridional velocities, temperatures and surface pressures computed from the 15 year NMC time series. A least squares fit to a series of Legendre polynomials is used to compute the basic states between 20 H and the equator, and the hemispheres are assumed symmetric. The model is spectral in the zonal direction, and centered differences are employed in the meridional and vertical directions. Since the model is steady and linear, the solution is obtained by inversion of a block, penta-diagonal matrix. The model simulates the climatology of the GFDL nine level, spectral general circulation model quite closely, particularly in middle latitudes above the boundary layer. This experiment is an extension of that simulation to examine variability of the steady, linear solution.

  10. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.
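
    Protograph-based LDPC construction typically proceeds by "lifting" a small base matrix into a full parity-check matrix, replacing each protograph edge with a circulant permutation block. The toy base matrix and shift values below are illustrative, not the patented code families:

```python
import numpy as np

def lift(proto, shifts, Z):
    """Expand a protograph base matrix into a binary parity-check matrix.
    Each nonzero protograph entry becomes a Z x Z circulant permutation
    (the identity rolled by the given shift); zeros become Z x Z zero blocks."""
    m, n = proto.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            if proto[i, j]:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i][j], axis=1)
    return H

# Toy protograph: 2 check nodes, 4 variable nodes; shifts chosen arbitrarily.
proto = np.array([[1, 1, 1, 0],
                  [0, 1, 1, 1]])
shifts = [[0, 1, 2, 0],
          [0, 3, 1, 2]]
H = lift(proto, shifts, Z=4)   # 8 x 16 parity-check matrix
```

Rate-compatibility in the abstract's sense then amounts to marking some variable-node columns of the protograph as punctured (non-transmitted) or set to zero while reusing the same lifted structure.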

  11. Understanding climate impacts on vegetation using a spatiotemporal non-linear Granger causality framework

    NASA Astrophysics Data System (ADS)

    Papagiannopoulou, Christina; Decubber, Stijn; Miralles, Diego; Demuzere, Matthias; Dorigo, Wouter; Verhoest, Niko; Waegeman, Willem

    2017-04-01

    Satellite data provide an abundance of information about crucial climatic and environmental variables. These data - consisting of global records, spanning up to 35 years and having the form of multivariate time series with different spatial and temporal resolutions - enable the study of key climate-vegetation interactions. Although methods based on correlations and linear models are typically used for this purpose, their assumption of linearity in the climate-vegetation relationships is too simplistic. Therefore, we adopt a recently proposed non-linear Granger causality analysis [1], in which we incorporate spatial information, concatenating data from neighboring pixels and training a joint model on the combined data. Experimental results based on global data sets show that considering non-linear relationships leads to a higher explained variance of past vegetation dynamics, compared to simple linear models. Our approach consists of several steps. First, we compile an extensive database [1], which includes multiple data sets for land surface temperature, near-surface air temperature, surface radiation, precipitation, snow water equivalents and surface soil moisture. Based on this database, high-level features are constructed and considered as predictors in our machine-learning framework. These high-level features include (de-trended) seasonal anomalies, lagged variables, past cumulative variables, and extreme indices, all calculated from the raw climatic data. Second, we apply a spatiotemporal non-linear Granger causality framework - in which the linear predictive model is replaced by a non-linear machine learning algorithm - in order to assess which of these predictor variables Granger-cause vegetation dynamics at each 1° pixel. We use the de-trended anomalies of the Normalized Difference Vegetation Index (NDVI) to characterize vegetation, being the target variable of our framework.
Experimental results indicate that climate strongly (Granger-)causes vegetation dynamics in most regions globally. More specifically, water availability is the most dominant vegetation driver, being the dominant vegetation driver in 54% of the vegetated surface. Furthermore, our results show that precipitation and soil moisture have prolonged impacts on vegetation in semiarid regions, with up to 10% of additional explained variance on the vegetation dynamics occurring three months later. Finally, hydro-climatic extremes seem to have a remarkable impact on vegetation, since they also explain up to 10% of additional variance of vegetation in certain regions despite their infrequent occurrence. References [1] Papagiannopoulou, C., Miralles, D. G., Verhoest, N. E. C., Dorigo, W. A., and Waegeman, W.: A non-linear Granger causality framework to investigate climate-vegetation dynamics, Geosci. Model Dev. Discuss., doi:10.5194/gmd-2016-266, in review, 2016.
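
    The Granger-causal logic above, extra explained variance from lagged climate predictors beyond the target's own history, can be sketched with a linear stand-in for the paper's non-linear model. Data, lags, and coefficients below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic system: "vegetation" depends on its own past and on lag-2
# "precipitation", mimicking a delayed climate impact.
T = 500
precip = rng.normal(size=T)
ndvi = np.zeros(T)
for t in range(2, T):
    ndvi[t] = 0.5 * ndvi[t-1] + 0.4 * precip[t-2] + 0.1 * rng.normal()

target = ndvi[2:]            # value to predict at time t
lag1 = ndvi[1:-1]            # vegetation's own history (t-1)
lagP = precip[:-2]           # lagged climate predictor (t-2)

base = np.column_stack([np.ones(T - 2), lag1])   # history-only model
full = np.column_stack([base, lagP])             # history + climate

def r2(A, y):
    """Explained variance of a least-squares fit of y on columns of A."""
    res = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return 1.0 - res.var() / y.var()

gain = r2(full, target) - r2(base, target)   # Granger-style skill gain
```

A positive `gain` is the evidence of (Granger-)causation; the paper computes the analogous quantity per pixel with a non-linear learner in place of `lstsq`.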

  12. Variable-Delay Polarization Modulators for Cryogenic Millimeter-Wave Applications

    NASA Technical Reports Server (NTRS)

    Chuss, D. T.; Eimer, J. R.; Fixsen, D. J.; Hinderks, J.; Kogut, A. J.; Lazear, J.; Mirel, P.; Switzer, E.; Voellmer, G. M.; Wollack, E. J.

    2014-01-01

    We describe the design, construction, and initial validation of the variable-delay polarization modulator (VPM) designed for the PIPER cosmic microwave background polarimeter. The VPM modulates between linear and circular polarization by introducing a variable phase delay between orthogonal linear polarizations. Each VPM has a diameter of 39 cm and is engineered to operate in a cryogenic environment (1.5 K). We describe the mechanical design and performance of the kinematic double-blade flexure and drive mechanism along with the construction of the high precision wire grid polarizers.

  13. An improved multiple linear regression and data analysis computer program package

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.

  14. Old and New Ideas for Data Screening and Assumption Testing for Exploratory and Confirmatory Factor Analysis

    PubMed Central

    Flora, David B.; LaBrish, Cathy; Chalmers, R. Philip

    2011-01-01

    We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables, and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables. PMID:22403561

  15. Aspects of porosity prediction using multivariate linear regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrnes, A.P.; Wilson, M.D.

    1991-03-01

    Highly accurate multiple linear regression models have been developed for sandstones of diverse compositions. Porosity reduction or enhancement processes are controlled by the fundamental variables pressure (P), temperature (T), time (t), and composition (X), where composition includes mineralogy, size, sorting, fluid composition, etc. The multiple linear regression equation, of which all linear porosity prediction models are subsets, takes the generalized form: Porosity = C0 + C1(P) + C2(T) + C3(X) + C4(t) + C5(PT) + C6(PX) + C7(Pt) + C8(TX) + C9(Tt) + C10(Xt) + C11(PTX) + C12(PXt) + C13(PTt) + C14(TXt) + C15(PTXt). The first four primary variables are often interactive, thus requiring terms involving two or more primary variables (the form shown implies interaction and not necessarily multiplication). The final terms used may also involve simple mathematical transforms such as log X, e^T, X^2, or more complex transformations such as the Time-Temperature Index (TTI). The X term in the equation above represents a suite of compositional variables and, therefore, a fully expanded equation may include a series of terms incorporating these variables. Numerous published bivariate porosity prediction models involving P (or depth) or Tt (TTI) are effective to a degree, largely because of the high degree of collinearity between P and TTI. However, all such bivariate models ignore the unique contributions of P and Tt, as well as various X terms. These simpler models become poor predictors in regions where collinear relations change, where important variables have been ignored, or where the database does not include a sufficient range or weight distribution for the critical variables.
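
    Once interaction columns are built, the generalized form above is an ordinary least-squares problem. The subset of terms, the coefficients, and the data below are synthetic and purely illustrative of the fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic primary variables P, T, X, t on arbitrary (dimensionless) scales.
n = 200
P, T, X_, t = rng.uniform(0.5, 2.0, size=(4, n))

# Hypothetical porosity with two interaction terms (PT and Xt) plus noise.
phi = 30 - 4*P - 3*T + 2*X_ - 1.5*t + 1.2*P*T - 0.8*X_*t \
      + rng.normal(scale=0.2, size=n)

# Design matrix: intercept C0, primary terms, then interaction products.
design = np.column_stack([np.ones(n), P, T, X_, t, P*T, X_*t])
C, *_ = np.linalg.lstsq(design, phi, rcond=None)
```

In practice the model selection problem is choosing which of the sixteen generalized terms (and which transforms of X) to include, which the synthetic example sidesteps.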

  16. General job stress: a unidimensional measure and its non-linear relations with outcome variables.

    PubMed

    Yankelevich, Maya; Broadfoot, Alison; Gillespie, Jennifer Z; Gillespie, Michael A; Guidroz, Ashley

    2012-04-01

    This article aims to examine the non-linear relations between a general measure of job stress [Stress in General (SIG)] and two outcome variables: intentions to quit and job satisfaction. In so doing, we also re-examine the factor structure of the SIG and determine that, as a two-factor scale, it obscures non-linear relations with outcomes. Thus, in this research, we not only test for non-linear relations between stress and outcome variables but also present an updated version of the SIG scale. Using two distinct samples of working adults (sample 1, N = 589; sample 2, N = 4322), results indicate that a more parsimonious eight-item SIG has better model-data fit than the 15-item two-factor SIG and that the eight-item SIG has non-linear relations with job satisfaction and intentions to quit. Specifically, the revised SIG has an inverted curvilinear J-shaped relation with job satisfaction such that job satisfaction drops precipitously after a certain level of stress; the SIG has a J-shaped curvilinear relation with intentions to quit such that turnover intentions increase exponentially after a certain level of stress. Copyright © 2011 John Wiley & Sons, Ltd.

  17. Hyperspectral and multispectral data fusion based on linear-quadratic nonnegative matrix factorization

    NASA Astrophysics Data System (ADS)

    Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz

    2017-04-01

    This paper proposes three multisharpening approaches to enhance the spatial resolution of urban hyperspectral remote sensing images. These approaches, related to linear-quadratic spectral unmixing techniques, use a linear-quadratic nonnegative matrix factorization (NMF) multiplicative algorithm. These methods begin by unmixing the observable high-spectral/low-spatial resolution hyperspectral and high-spatial/low-spectral resolution multispectral images. The obtained high-spectral/high-spatial resolution features are then recombined, according to the linear-quadratic mixing model, to obtain an unobservable multisharpened high-spectral/high-spatial resolution hyperspectral image. In the first designed approach, hyperspectral and multispectral variables are independently optimized, once they have been coherently initialized. These variables are alternately updated in the second designed approach. In the third approach, the considered hyperspectral and multispectral variables are jointly updated. Experiments, using synthetic and real data, are conducted to assess the efficiency, in spatial and spectral domains, of the designed approaches and of linear NMF-based approaches from the literature. Experimental results show that the designed methods globally yield very satisfactory spectral and spatial fidelities for the multisharpened hyperspectral data. They also prove that these methods significantly outperform the used literature approaches.
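
    The multiplicative-update machinery referred to above is standard for plain linear NMF (V ≈ WH with nonnegative factors); the paper's linear-quadratic extension is not reproduced here, only the linear case it builds on:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic nonnegative data of exact rank r, standing in for unmixable pixels.
m, n, r = 30, 40, 4
V = rng.random((m, r)) @ rng.random((r, n))

# Classic multiplicative updates (Lee-Seung): each step multiplies the
# factors by a nonnegative ratio, so W and H stay nonnegative throughout
# and the Frobenius reconstruction error is non-increasing.
W = rng.random((m, r)) + 0.1
H = rng.random((r, n)) + 0.1
eps = 1e-9
err0 = np.linalg.norm(V - W @ H)
for _ in range(300):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
err = np.linalg.norm(V - W @ H)
```

In the multisharpening setting, the columns of W play the role of endmember spectra and H the abundance maps; the linear-quadratic variant adds product terms between endmembers to the mixing model.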

  18. A Revised Simplex Method for Test Construction Problems. Research Report 90-5.

    ERIC Educational Resources Information Center

    Adema, Jos J.

    Linear programming models with 0-1 variables are useful for the construction of tests from an item bank. Most solution strategies for these models start with solving the relaxed 0-1 linear programming model, allowing the 0-1 variables to take on values between 0 and 1. Then, a 0-1 solution is found by just rounding, optimal rounding, or a…

  19. A Comprehensive Meta-Analysis of Triple P-Positive Parenting Program Using Hierarchical Linear Modeling: Effectiveness and Moderating Variables

    ERIC Educational Resources Information Center

    Nowak, Christoph; Heinrichs, Nina

    2008-01-01

    A meta-analysis encompassing all studies evaluating the impact of the Triple P-Positive Parenting Program on parent and child outcome measures was conducted in an effort to identify variables that moderate the program's effectiveness. Hierarchical linear models (HLM) with three levels of data were employed to analyze effect sizes. The results (N =…

  20. Quantifying spatial and temporal variabilities of microwave brightness temperature over the U.S. Southern Great Plains

    NASA Technical Reports Server (NTRS)

    Choudhury, B. J.; Owe, M.; Ormsby, J. P.; Chang, A. T. C.; Wang, J. R.; Goward, S. N.; Golus, R. E.

    1987-01-01

    Spatial and temporal variabilities of microwave brightness temperature over the U.S. Southern Great Plains are quantified in terms of vegetation and soil wetness. The brightness temperatures (TB) are the daytime observations from April to October for five years (1979 to 1983) obtained by the Nimbus-7 Scanning Multichannel Microwave Radiometer at 6.6 GHz frequency, horizontal polarization. The spatial and temporal variabilities of vegetation are assessed using visible and near-infrared observations by the NOAA-7 Advanced Very High Resolution Radiometer (AVHRR), while an Antecedent Precipitation Index (API) model is used for soil wetness. The API model was able to account for more than 50 percent of the observed variability in TB, although linear correlations between TB and API were generally significant at the 1 percent level. The slope of the linear regression between TB and API is found to correlate linearly with an index for vegetation density derived from AVHRR data.

  1. Introduction to statistical modelling 2: categorical variables and interactions in linear regression.

    PubMed

    Lunt, Mark

    2015-07-01

    In the first article in this series we explored the use of linear regression to predict an outcome variable from a number of predictive factors. It assumed that the predictive factors were measured on an interval scale. However, this article shows how categorical variables can also be included in a linear regression model, enabling predictions to be made separately for different groups and allowing for testing the hypothesis that the outcome differs between groups. The use of interaction terms to measure whether the effect of a particular predictor variable differs between groups is also explained. An alternative approach to testing the difference between groups of the effect of a given predictor, which consists of measuring the effect in each group separately and seeing whether the statistical significance differs between the groups, is shown to be misleading. © The Author 2013. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
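
    A hedged sketch of the dummy-coding and interaction approach the article describes (the data and coefficients below are simulated for illustration, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                 # continuous predictor
group = rng.integers(0, 2, size=n)     # categorical: 0 = reference, 1 = other
# Simulated outcome: the slope of x differs by 1.5 between the two groups.
y = 1.0 + 2.0 * x + 0.5 * group + 1.5 * group * x + rng.normal(scale=0.1, size=n)

# Design matrix: intercept, x, group dummy, interaction term x*group.
X = np.column_stack([np.ones(n), x, group, x * group])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[2] estimates the between-group shift in the outcome;
# beta[3] estimates how the slope of x differs between groups.
```

    Testing whether beta[3] differs from zero is the correct way to compare the predictor's effect across groups, rather than comparing per-group significance, as the article warns.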

  2. A novel approach for prediction of tacrolimus blood concentration in liver transplantation patients in the intensive care unit through support vector regression.

    PubMed

    Van Looy, Stijn; Verplancke, Thierry; Benoit, Dominique; Hoste, Eric; Van Maele, Georges; De Turck, Filip; Decruyenaere, Johan

    2007-01-01

    Tacrolimus is an important immunosuppressive drug for organ transplantation patients. It has a narrow therapeutic range, toxic side effects, and a blood concentration with wide intra- and interindividual variability. Hence, it is of the utmost importance to monitor tacrolimus blood concentration, thereby ensuring clinical effect and avoiding toxic side effects. Prediction models for tacrolimus blood concentration can improve clinical care by optimizing monitoring of these concentrations, especially in the initial phase after transplantation during intensive care unit (ICU) stay. This is the first study in the ICU in which support vector machines, as a new data modeling technique, are investigated and tested in their prediction capabilities of tacrolimus blood concentration. Linear support vector regression (SVR) and nonlinear radial basis function (RBF) SVR are compared with multiple linear regression (MLR). Tacrolimus blood concentrations, together with 35 other relevant variables from 50 liver transplantation patients, were extracted from our ICU database. This resulted in a dataset of 457 blood samples, on average between 9 and 10 samples per patient, finally resulting in a database of more than 16,000 data values. Nonlinear RBF SVR, linear SVR, and MLR were performed after selection of clinically relevant input variables and model parameters. Differences between observed and predicted tacrolimus blood concentrations were calculated. Prediction accuracy of the three methods was compared after fivefold cross-validation (Friedman test and Wilcoxon signed rank analysis). Linear SVR and nonlinear RBF SVR had mean absolute differences between observed and predicted tacrolimus blood concentrations of 2.31 ng/ml (standard deviation [SD] 2.47) and 2.38 ng/ml (SD 2.49), respectively. MLR had a mean absolute difference of 2.73 ng/ml (SD 3.79). The difference between linear SVR and MLR was statistically significant (p < 0.001). 
RBF SVR had the advantage of requiring only 2 input variables to perform this prediction in comparison to 15 and 16 variables needed by linear SVR and MLR, respectively. This is an indication of the superior prediction capability of nonlinear SVR. Prediction of tacrolimus blood concentration with linear and nonlinear SVR was excellent, and accuracy was superior in comparison with an MLR model.
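
    The contrast between a linear and an RBF-based regressor can be sketched as follows. This is not the study's pipeline or data; kernel ridge regression stands in for RBF SVR here (an assumption for brevity: SVR uses an epsilon-insensitive loss, but the same RBF feature space):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(120, 1))
y = np.sin(2 * X[:, 0]) + rng.normal(scale=0.05, size=120)  # nonlinear ground truth

# Multiple linear regression (MLR) baseline.
D = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
mlr_mae = np.abs(D @ beta - y).mean()

# Nonlinear RBF model (kernel ridge regression as a stand-in for RBF SVR).
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)
rbf_mae = np.abs(K @ alpha - y).mean()
# When the underlying relation is nonlinear, the RBF model's error is far lower.
```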

  3. Examining the influence of link function misspecification in conventional regression models for developing crash modification factors.

    PubMed

    Wu, Lingtao; Lord, Dominique

    2017-05-01

    This study further examined the use of regression models for developing crash modification factors (CMFs), specifically focusing on the misspecification in the link function. The primary objectives were to validate the accuracy of CMFs derived from the commonly used regression models (i.e., generalized linear models or GLMs with additive linear link functions) when some of the variables have nonlinear relationships and quantify the amount of bias as a function of the nonlinearity. Using the concept of artificial realistic data, various linear and nonlinear crash modification functions (CM-Functions) were assumed for three variables. Crash counts were randomly generated based on these CM-Functions. CMFs were then derived from regression models for three different scenarios. The results were compared with the assumed true values. The main findings are summarized as follows: (1) when some variables have nonlinear relationships with crash risk, the CMFs for these variables derived from the commonly used GLMs are all biased, especially around areas away from the baseline conditions (e.g., boundary areas); (2) with the increase in nonlinearity (i.e., nonlinear relationship becomes stronger), the bias becomes more significant; (3) the quality of CMFs for other variables having linear relationships can be influenced when mixed with those having nonlinear relationships, but the accuracy may still be acceptable; and (4) the misuse of the link function for one or more variables can also lead to biased estimates for other parameters. This study raised the importance of the link function when using regression models for developing CMFs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Understanding Coupling of Global and Diffuse Solar Radiation with Climatic Variability

    NASA Astrophysics Data System (ADS)

    Hamdan, Lubna

Global solar radiation data are important for a wide variety of applications and scientific studies. However, these data are not readily available because of the cost of measuring equipment and the tedious maintenance and calibration requirements. A wide variety of models has been introduced by researchers to estimate and/or predict global solar radiation and its components (direct and diffuse radiation) using other readily obtainable atmospheric parameters. The goal of this research is to understand the coupling of global and diffuse solar radiation with climatic variability by investigating the relationships between these radiations and atmospheric parameters. For this purpose, we applied multilinear regression analysis to the data of the National Solar Radiation Database 1991--2010 Update. The analysis showed that the main atmospheric parameters affecting the amount of global radiation received at the earth's surface are cloud cover and relative humidity. Global radiation correlates negatively with both variables. Linear models are excellent approximations for the relationship between atmospheric parameters and global radiation: a linear model with the predictors total cloud cover, relative humidity, and extraterrestrial radiation is able to explain around 98% of the variability in global radiation. For diffuse radiation, the analysis showed that the main atmospheric parameters affecting the amount received at the earth's surface are cloud cover and aerosol optical depth. Diffuse radiation correlates positively with both variables. Linear models are very good approximations for the relationship between atmospheric parameters and diffuse radiation: a linear model with the predictors total cloud cover, aerosol optical depth, and extraterrestrial radiation is able to explain around 91% of the variability in diffuse radiation. Prediction analysis showed that the linear models we fitted were able to predict diffuse radiation with a test adjusted R2 of 0.93, using the data of total cloud cover, aerosol optical depth, relative humidity, and extraterrestrial radiation. However, for prediction purposes, using nonlinear terms or nonlinear models might enhance the prediction of diffuse radiation.

  5. Cross-conditional entropy and coherence analysis of pharmaco-EEG changes induced by alprazolam.

    PubMed

    Alonso, J F; Mañanas, M A; Romero, S; Rojas-Martínez, M; Riba, J

    2012-06-01

Quantitative analysis of electroencephalographic signals (EEG) and their interpretation constitute a helpful tool in the assessment of the bioavailability of psychoactive drugs in the brain. Furthermore, psychotropic drug groups have typical signatures which relate biochemical mechanisms with specific EEG changes. The aim was to analyze the pharmacological effect of a dose of alprazolam on the connectivity of the brain during wakefulness by means of linear and nonlinear approaches. EEG signals were recorded after alprazolam administration in a placebo-controlled crossover clinical trial. Nonlinear couplings assessed by means of corrected cross-conditional entropy were compared to linear couplings measured with the classical magnitude squared coherence. Linear variables evidenced a statistically significant drug-induced decrease, whereas nonlinear variables showed significant increases. All changes were highly correlated to drug plasma concentrations. The spatial distribution of the observed connectivity changes clearly differed from a previous study: changes before and after the maximum drug effect were mainly observed over the anterior half of the scalp. Additionally, a new variable with very low computational cost was defined to evaluate nonlinear coupling. This is particularly interesting when all pairs of EEG channels are assessed, as in this study. Results showed that alprazolam induced uncoupling between regions of the scalp, with opposite trends depending on the measure: a decrease in the linear variables and an increase in the nonlinear ones. The coupling maps provided consistent information about how brain connectivity changed, showing that linear and nonlinear interactions need to be evaluated separately.

  6. Feedback linearization based control of a variable air volume air conditioning system for cooling applications.

    PubMed

    Thosar, Archana; Patra, Amit; Bhattacharyya, Souvik

    2008-07-01

    Design of a nonlinear control system for a Variable Air Volume Air Conditioning (VAVAC) plant through feedback linearization is presented in this article. VAVAC systems attempt to reduce building energy consumption while maintaining the primary role of air conditioning. The temperature of the space is maintained at a constant level by establishing a balance between the cooling load generated in the space and the air supply delivered to meet the load. The dynamic model of a VAVAC plant is derived and formulated as a MIMO bilinear system. Feedback linearization is applied for decoupling and linearization of the nonlinear model. Simulation results for a laboratory scale plant are presented to demonstrate the potential of keeping comfort and maintaining energy optimal performance by this methodology. Results obtained with a conventional PI controller and a feedback linearizing controller are compared and the superiority of the proposed approach is clearly established.
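
    The core idea of feedback linearization can be sketched on a toy scalar plant (this is an illustrative stand-in, not the paper's VAVAC model): for xdot = f(x) + g(x)u with g(x) never zero, the input u = (v - f(x)) / g(x) cancels the nonlinearity exactly, leaving the linear system xdot = v for the outer-loop design:

```python
f = lambda x: -x**3              # assumed nonlinear drift (illustrative)
g = lambda x: 1.0 + 0.5 * x**2   # input gain, never zero

def linearizing_control(x, v):
    """Cancel the nonlinearity so the closed loop behaves like xdot = v."""
    return (v - f(x)) / g(x)

# Closed loop: a simple linear outer law v = -k*(x - x_ref) then drives x to x_ref.
x, x_ref, k, dt = 2.0, 0.5, 4.0, 0.01
for _ in range(1000):
    v = -k * (x - x_ref)
    x += dt * (f(x) + g(x) * linearizing_control(x, v))
```

    After cancellation the closed loop is a first-order linear system, which is why a conventional linear design (or a PI loop, as compared in the paper) can then be applied.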

  7. Determination of water depth with high-resolution satellite imagery over variable bottom types

    USGS Publications Warehouse

    Stumpf, Richard P.; Holderied, Kristine; Sinclair, Mark

    2003-01-01

    A standard algorithm for determining depth in clear water from passive sensors exists; but it requires tuning of five parameters and does not retrieve depths where the bottom has an extremely low albedo. To address these issues, we developed an empirical solution using a ratio of reflectances that has only two tunable parameters and can be applied to low-albedo features. The two algorithms--the standard linear transform and the new ratio transform--were compared through analysis of IKONOS satellite imagery against lidar bathymetry. The coefficients for the ratio algorithm were tuned manually to a few depths from a nautical chart, yet performed as well as the linear algorithm tuned using multiple linear regression against the lidar. Both algorithms compensate for variable bottom type and albedo (sand, pavement, algae, coral) and retrieve bathymetry in water depths of less than 10-15 m. However, the linear transform does not distinguish depths >15 m and is more subject to variability across the studied atolls. The ratio transform can, in clear water, retrieve depths in >25 m of water and shows greater stability between different areas. It also performs slightly better in scattering turbidity than the linear transform. The ratio algorithm is somewhat noisier and cannot always adequately resolve fine morphology (structures smaller than 4-5 pixels) in water depths >15-20 m. In general, the ratio transform is more robust than the linear transform.
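
    The two-parameter ratio transform has the form described in the paper: depth is proportional to the ratio of log-scaled reflectances in two bands, with a tunable gain and offset. The coefficient values below are illustrative placeholders, not the calibrated values from the study:

```python
import numpy as np

def ratio_depth(R_blue, R_green, m1=50.0, m0=40.0, n=1000.0):
    """Depth estimate from the log-ratio of two water-leaving reflectances.

    m1 (gain) and m0 (offset) are the two tunable parameters; n is a fixed
    constant keeping both logarithms positive. Values here are illustrative.
    """
    return m1 * np.log(n * R_blue) / np.log(n * R_green) - m0
```

    Because both bands dim together over a dark bottom while their ratio is driven mainly by depth, the transform remains usable over low-albedo features where the linear transform fails.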

  8. Stoichiometric Lithium Niobate (SLN) Based Linearized Electro-Optic (EO) Modulator

    DTIC Science & Technology

    2006-01-01

AFRL-SN-RS-TR-2006-15, Final Technical Report, January 2006: Stoichiometric Lithium Niobate (SLN) Based Linearized Electro-Optic (EO) Modulator. Authors: Dr. Stuart Kingsley, Dr. Sri Sriram. Subject terms: electro-optic modulator, linearization, directional coupler, variable coupling, optical waveguide, Mach-Zehnder, photonic link, lithium niobate.

  9. Some comparisons of complexity in dictionary-based and linear computational models.

    PubMed

    Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello

    2011-03-01

Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accurate approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators: the traditional linear ones and the so-called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. User's manual for LINEAR, a FORTRAN program to derive linear aircraft models

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Patterson, Brian P.; Antoniewicz, Robert F.

    1987-01-01

    This report documents a FORTRAN program that provides a powerful and flexible tool for the linearization of aircraft models. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
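
    The numerical core of such a linearization tool can be sketched as a central-difference Jacobian evaluated about a trim point; the pendulum dynamics below are an illustrative stand-in, not LINEAR's aircraft equations of motion:

```python
import numpy as np

def jacobian(f, x0, eps=1e-6):
    """Central-difference Jacobian of f about x0 (rows: outputs, cols: states)."""
    f0 = np.asarray(f(x0))
    J = np.zeros((f0.size, x0.size))
    for i in range(x0.size):
        dx = np.zeros_like(x0)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x0 + dx)) - np.asarray(f(x0 - dx))) / (2 * eps)
    return J

def pendulum(x):                      # x = [theta, omega]; illustrative plant
    return np.array([x[1], -9.81 * np.sin(x[0])])

A = jacobian(pendulum, np.zeros(2))   # state matrix A = df/dx at the trim point
```

    Applying the same differencing to the output map h(x) yields the observation matrix, giving the full state-space model (A, B, C, D) that LINEAR produces for aircraft dynamics.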

  11. Explicit criteria for prioritization of cataract surgery

    PubMed Central

    Ma Quintana, José; Escobar, Antonio; Bilbao, Amaia

    2006-01-01

    Background Consensus techniques have been used previously to create explicit criteria to prioritize cataract extraction; however, the appropriateness of the intervention was not included explicitly in previous studies. We developed a prioritization tool for cataract extraction according to the RAND method. Methods Criteria were developed using a modified Delphi panel judgment process. A panel of 11 ophthalmologists was assembled. Ratings were analyzed regarding the level of agreement among panelists. We studied the effect of all variables on the final panel score using general linear and logistic regression models. Priority scoring systems were developed by means of optimal scaling and general linear models. The explicit criteria developed were summarized by means of regression tree analysis. Results Eight variables were considered to create the indications. Of the 310 indications that the panel evaluated, 22.6% were considered high priority, 52.3% intermediate priority, and 25.2% low priority. Agreement was reached for 31.9% of the indications and disagreement for 0.3%. Logistic regression and general linear models showed that the preoperative visual acuity of the cataractous eye, visual function, and anticipated visual acuity postoperatively were the most influential variables. Alternative and simple scoring systems were obtained by optimal scaling and general linear models where the previous variables were also the most important. The decision tree also shows the importance of the previous variables and the appropriateness of the intervention. Conclusion Our results showed acceptable validity as an evaluation and management tool for prioritizing cataract extraction. It also provides easy algorithms for use in clinical practice. PMID:16512893

  12. Matrix completion by deep matrix factorization.

    PubMed

    Fan, Jicong; Cheng, Jieyu

    2018-02-01

Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion, but there still exist considerable limitations. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods do and DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. In-situ X-ray CT results of damage evolution in L6 ordinary chondrite meteorites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cuadra, Jefferson A.; Hazeli, Kavan; Ramesh, K. T.

    2016-06-17

These are slides about in-situ X-ray CT results of damage evolution in L6 ordinary chondrite meteorites. The following topics are covered: mechanical and thermal damage characterization, list of Grosvenor Mountain (GRO) meteorite samples, in-situ x-ray compression test setup, GRO-chipped reference at 0 N - existing cracks, GRO-chipped loaded at 1580 N, in-situ x-ray thermal fatigue test setup, GRO-B14 room temperature reference, GRO-B14 Cycle 47 at 200°C, GRO-B14 Cycle 47 at room temperature, conclusions from qualitative analysis, future work and next steps. Conclusions are the following: Both GRO-Chipped and GRO-B14 had existing voids and cracks within the volume. These sites with existing damage were selected for CT images from mechanically and thermally loaded scans since they are prone to damage initiation. The GRO-Chipped sample was loaded to 1580 N which resulted in a 14% compressive engineering strain, calculated using LVDT. Based on the CT cross sectional images, the GRO-B14 sample at 200°C has a thermal expansion of approximately 96 μm in height (i.e. ~1.6% engineering strain).

  14. Damage detection and quantification in a structural model under seismic excitation using time-frequency analysis

    NASA Astrophysics Data System (ADS)

    Chan, Chun-Kai; Loh, Chin-Hsiung; Wu, Tzu-Hsiu

    2015-04-01

In civil engineering, health monitoring and damage detection are typically carried out using a large number of sensors. Most methods require global measurements to extract the properties of the structure. However, some sensors, such as LVDTs, cannot be used due to in situ limitations, so the global deformation remains unknown. An experiment is used to demonstrate the proposed algorithms: a one-story, two-bay reinforced concrete frame under weak and strong seismic excitation. In this paper, signal processing techniques and nonlinear identification are applied to the measured seismic response of reinforced concrete structures subjected to different levels of earthquake excitation. Both modal-based and signal-based system identification and feature extraction techniques are used to study the nonlinear inelastic response of the RC frame, using either input and output response data or output-only measurements. The signal-based damage identification methods include enhanced time-frequency analysis of acceleration responses and estimation of permanent deformation directly from acceleration response data. Finally, local deformation measurements from a dense optical tracker are also used to quantify the damage of the RC frame structure.

  15. Implementation of the control electronics for KMOS instrument

    NASA Astrophysics Data System (ADS)

    Hess, Hans-Joachim; Ilijevski, Ivica; Kravcar, Helmut; Richter, Josef; Rühfel, Josef; Schwab, Christoph

    2010-07-01

The KMOS instrument is built to be one of the second generation VLT instruments. It is a highly complex multi-object spectrograph for the near infrared. Nearly 60 cryogenic mechanisms have to be controlled. These include 24 deployable Pick-Off arms, three filter and grating wheels, three focus stages, and four lamps with an attenuator wheel. These mechanisms and a calibration unit are supervised by three control cabinets based on the VLT standards. To follow the rotation of the Nasmyth adaptor, the cabinets are mounted in a co-rotating structure. The presentation will highlight the requirements on the control electronics and how these are met by new technologies providing compact and reliable signal distribution. To enable high-density wiring within the given space envelope, flex-rigid printed circuit board designs have been installed. In addition, an electronic system that detects collisions between the moving Pick-Off arms, enabling safe operation, will be presented. The control system is designed to achieve two-micron resolution as required by optomechanical and flexure constraints. Dedicated LVDT sensors are capable of identifying the absolute positions of the Pick-Off arms. These contribute to a safe recovery procedure after a power failure or an accidental collision.

  16. Feedback linearization for control of air breathing engines

    NASA Technical Reports Server (NTRS)

    Phillips, Stephen; Mattern, Duane

    1991-01-01

    The method of feedback linearization for control of the nonlinear nozzle and compressor components of an air breathing engine is presented. This method overcomes the need for a large number of scheduling variables and operating points to accurately model highly nonlinear plants. Feedback linearization also results in linear closed loop system performance simplifying subsequent control design. Feedback linearization is used for the nonlinear partial engine model and performance is verified through simulation.

  17. Modeling workplace bullying using catastrophe theory.

    PubMed

    Escartin, J; Ceja, L; Navarro, J; Zapf, D

    2013-10-01

Workplace bullying is defined as negative behaviors directed at organizational members or their work context that occur regularly and repeatedly over a period of time. Employees' perceptions of psychosocial safety climate, workplace bullying victimization, and workplace bullying perpetration were assessed within a sample of nearly 5,000 workers. Linear and nonlinear approaches were applied in order to model both continuous and sudden changes in workplace bullying. More specifically, the present study examines whether a nonlinear dynamical systems model (i.e., a cusp catastrophe model) is superior to the linear combination of variables for predicting the effect of psychosocial safety climate and workplace bullying victimization on workplace bullying perpetration. According to the AICc and BIC indices, the linear regression model fits the data better than the cusp catastrophe model. The study concludes that some phenomena, especially unhealthy behaviors at work (like workplace bullying), may be better studied using linear approaches as opposed to nonlinear dynamical systems models. This can be explained through the healthy variability hypothesis, which argues that positive organizational behavior is likely to present nonlinear behavior, while a decrease in such variability may indicate the occurrence of negative behaviors at work.

  18. Power-ratio tunable dual-wavelength laser using linearly variable Fabry-Perot filter as output coupler.

    PubMed

    Wang, Xiaozhong; Wang, Zhongfa; Bu, Yikun; Chen, Lujian; Cai, Guoxiong; Huang, Wencai; Cai, Zhiping; Chen, Nan

    2016-02-01

For a linearly variable Fabry-Perot filter, the peak transmission wavelength changes linearly with the transverse position of the substrate. Such a Fabry-Perot filter is designed, fabricated, and used experimentally in this paper as the output coupler of a c-cut Nd:YVO4 laser to obtain a 1062 and 1083 nm dual-wavelength laser. The peak transmission wavelengths are gradually shifted from 1040.8 to 1070.8 nm. The peak transmission wavelength of the Fabry-Perot filter used as the output coupler for the dual-wavelength laser is 1068 nm and resides between 1062 and 1083 nm, which makes the transmissions of the desired dual wavelengths change with opposite slopes as the filter is shifted transversely. Consequently, the powers of the two wavelengths change in opposite directions. A 1062 and 1083 nm dual-wavelength laser with oppositely tunable branch powers is successfully demonstrated. Design principles of the linearly variable Fabry-Perot filter used as an output coupler are discussed, and the advantages of the method are summarized.

  19. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    A sequential adaptive experimental design procedure for a related problem is studied. It is assumed that a finite set of potential linear models relating certain controlled variables to an observed variable is postulated, and that exactly one of these models is correct. The problem is to sequentially design most informative experiments so that the correct model equation can be determined with as little experimentation as possible. Discussion includes: structure of the linear models; prerequisite distribution theory; entropy functions and the Kullback-Leibler information function; the sequential decision procedure; and computer simulation results. An example of application is given.

  20. Correlation and agreement: overview and clarification of competing concepts and measures.

    PubMed

    Liu, Jinyuan; Tang, Wan; Chen, Guanqin; Lu, Yin; Feng, Changyong; Tu, Xin M

    2016-04-25

    Agreement and correlation are widely-used concepts that assess the association between variables. Although similar and related, they represent completely different notions of association. Assessing agreement between variables assumes that the variables measure the same construct, while correlation of variables can be assessed for variables that measure completely different constructs. This conceptual difference requires the use of different statistical methods, and when assessing agreement or correlation, the statistical method may vary depending on the distribution of the data and the interest of the investigator. For example, the Pearson correlation, a popular measure of correlation between continuous variables, is only informative when applied to variables that have linear relationships; it may be non-informative or even misleading when applied to variables that are not linearly related. Likewise, the intraclass correlation, a popular measure of agreement between continuous variables, may not provide sufficient information for investigators if the nature of poor agreement is of interest. This report reviews the concepts of agreement and correlation and discusses differences in the application of several commonly used measures.
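
    The distinction can be made concrete with a small sketch: two measurements can be perfectly correlated yet disagree systematically. Lin's concordance correlation coefficient (CCC) is used here as the agreement measure, standing in for the intraclass correlation the report discusses; the data are illustrative:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation: sensitive only to linear association."""
    return np.corrcoef(x, y)[0, 1]

def ccc(x, y):
    """Lin's concordance correlation coefficient: penalizes scale/location bias."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

x = np.arange(10.0)
y = 2 * x + 5   # perfectly linear relation, but systematic bias in scale and location
# pearson(x, y) is exactly 1, yet ccc(x, y) is well below 1:
# the measurements are correlated but not in agreement.
```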

  1. Suppression of chaos at slow variables by rapidly mixing fast dynamics

    NASA Astrophysics Data System (ADS)

    Abramov, R.

    2012-04-01

    One of the key questions about chaotic multiscale systems is how the fast dynamics affects chaos at the slow variables, and, therefore, impacts uncertainty and predictability of the slow dynamics. Here we demonstrate that the linear slow-fast coupling with the total energy conservation property promotes the suppression of chaos at the slow variables through the rapid mixing at the fast variables, both theoretically and through numerical simulations. A suitable mathematical framework is developed, connecting the slow dynamics on the tangent subspaces to the infinite-time linear response of the mean state to a constant external forcing at the fast variables. Additionally, it is shown that the uncoupled dynamics for the slow variables may remain chaotic while the complete multiscale system loses chaos and becomes completely predictable at the slow variables through increasing chaos and turbulence at the fast variables. This result contradicts the common sense intuition, where, naturally, one would think that coupling a slow weakly chaotic system with another much faster and much stronger mixing system would result in general increase of chaos at the slow variables.

  2. An efficient variable projection formulation for separable nonlinear least squares problems.

    PubMed

    Gan, Min; Li, Han-Xiong

    2014-05-01

    We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving the nonlinear least squares problems involving only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than those of previous ones. The Levenberg-Marquardt algorithm using finite difference method is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves significant reduction in computing time.
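
    The idea can be sketched in a few lines for a sum of two exponentials (a generic variable projection sketch, not the authors' matrix-decomposition formulation): the linear coefficients are solved by linear least squares inside the residual function, so the outer optimizer sees only the nonlinear parameters.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic data from y = 2*exp(-1.0*t) + 0.5*exp(-0.2*t) plus small noise.
t = np.linspace(0, 10, 200)
rng = np.random.default_rng(1)
y = 2.0 * np.exp(-1.0 * t) + 0.5 * np.exp(-0.2 * t) + rng.normal(0, 0.01, t.size)

def basis(theta):
    # Columns are the nonlinear basis functions phi_i(t; theta).
    return np.column_stack([np.exp(-theta[0] * t), np.exp(-theta[1] * t)])

def projected_residual(theta):
    # Project out the linear coefficients: solve the linear least squares
    # problem for c given theta, then return the remaining residual.
    Phi = basis(theta)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ c - y

sol = least_squares(projected_residual, x0=[2.0, 0.05])
print(np.sort(sol.x))  # approximately [0.2, 1.0]
```

    The outer problem has only two unknowns instead of four, which is the point of the projection; the paper's contribution is a cheaper functional for this inner linear solve.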

  3. Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre

    2014-07-01

    We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
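
    The penalty reduction at the heart of such mappings can be shown on a toy all-binary instance (the matrices below are invented, and this is the standard quadratic-penalty construction rather than the paper's PDE-constrained one): the linear constraint is absorbed as P*||Ax - b||^2, and because x_i^2 = x_i for binary variables the linear penalty terms fold onto the QUBO diagonal.

```python
import itertools
import numpy as np

# Toy instance restricted to binary variables:
# minimize x^T Q x  subject to  A x = b,  x in {0,1}^n.
Q = np.array([[1.0, -2.0, 0.5],
              [-2.0, 3.0, -1.0],
              [0.5, -1.0, 2.0]])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([2.0])

# Penalty mapping to QUBO: add P * ||Ax - b||^2.  Since x_i^2 = x_i for
# binary x, the linear penalty terms go onto the diagonal; the constant
# P * b^T b is dropped as it does not affect the minimizer.
P = 50.0
Qubo = Q + P * A.T @ A + np.diag(-2.0 * P * (A.T @ b).ravel())

def best(M, feasible_only=False):
    # Brute-force minimizer over {0,1}^3, optionally restricted to Ax = b.
    vals = {}
    for x in itertools.product([0, 1], repeat=3):
        xv = np.array(x, float)
        if feasible_only and not np.allclose(A @ xv, b):
            continue
        vals[x] = xv @ M @ xv
    return min(vals, key=vals.get)

# The unconstrained QUBO minimizer coincides with the constrained minimizer.
print(best(Q, feasible_only=True), best(Qubo))
```

    With P large enough, any infeasible point pays more in penalty than it can gain from Q, so the two minimizers agree; the paper's construction achieves the analogous effect while keeping the QUBO size tied to the discrete controls only.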

  4. Price Strategies between a Dominant Retailer and Manufacturers

    NASA Astrophysics Data System (ADS)

    Cho, Hsun Jung; Mak, Hou Kit

    2009-08-01

    Supply-chain-related game-theoretical applications have been discussed for decades. This research accounts for the emergence of a dominant retailer and studies retailer-Stackelberg pricing models of distribution channels. Research on the channel pricing game may use different definitions of the pricing decision variables. In this research, we focus on the retailer-Stackelberg pricing game and discuss the effects of choosing different decision variables. According to the literature, the strategies between channel members depend critically on the form of the demand function. Two different demand forms, linear and non-linear, are considered in our numerical examples. Our major finding is that the outcomes depend not on the manufacturers' pricing decisions but on the retailer's pricing decision, and that choosing the percentage margin as the retailer's decision variable is the best strategy for the retailer but the worst for the manufacturers. The numerical results are consistent between the linear and non-linear demand forms.
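
    The backward-induction logic can be sketched numerically with an invented linear demand q = a - b*p (parameter values and the grid search are purely illustrative, not the paper's models): the leading retailer chooses its margin anticipating the manufacturer's best-response wholesale price.

```python
import numpy as np

# Illustrative retailer-Stackelberg game with linear demand q = a - b*p and
# retail price p = w + m (wholesale price plus the retailer's dollar margin).
# The retailer leads by committing to m; the manufacturer follows with w.
# All parameter values are invented for illustration.
a, b, c = 100.0, 1.0, 10.0         # demand intercept, slope, unit cost
ws = np.linspace(c, a / b, 20001)  # candidate wholesale prices

def follower_w(m):
    # Manufacturer's best response: maximize (w - c) * max(a - b*(w + m), 0).
    profit = (ws - c) * np.maximum(a - b * (ws + m), 0.0)
    return ws[np.argmax(profit)]

ms = np.linspace(0.0, a / b, 2001)
retail_profit = [m * max(a - b * (follower_w(m) + m), 0.0) for m in ms]
m_star = ms[np.argmax(retail_profit)]
print(m_star, follower_w(m_star))  # near the analytic optimum m* = 45, w* = 32.5
```

    Analytically, backward induction gives w*(m) = (a - b*m + b*c) / (2b) and hence m* = (a - b*c) / (2b), which the grid search reproduces for these parameters.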

  5. Time-response shaping using output to input saturation transformation

    NASA Astrophysics Data System (ADS)

    Chambon, E.; Burlion, L.; Apkarian, P.

    2018-03-01

    For linear systems, the control law design is often performed so that the resulting closed loop meets specific frequency-domain requirements. However, in many cases, the obtained controller does not enforce time-domain requirements, amongst which is the objective of keeping a scalar output variable in a given interval. In this article, a transformation is proposed to convert prescribed bounds on an output variable into time-varying saturations on the synthesised linear scalar control law. This transformation uses well-chosen time-varying coefficients so that the resulting time-varying saturation bounds do not overlap in the presence of disturbances. Using an anti-windup approach, it is shown that the origin of the resulting closed loop is globally asymptotically stable and that the constrained output variable satisfies the time-domain constraints in the presence of an unknown finite-energy-bounded disturbance. An application to a linear ball and beam model is presented.

  6. Linear response theory for annealing of radiation damage in semiconductor devices

    NASA Technical Reports Server (NTRS)

    Litovchenko, Vitaly

    1988-01-01

    A theoretical study of the radiation/annealing response of MOS ICs is described. Although many experiments have been performed in this field, no comprehensive theory dealing with radiation/annealing response has been proposed. Many attempts have been made to apply linear response theory, but no theoretical foundation has been presented. The linear response theory outlined here is capable of describing a broad range of radiation/annealing response phenomena in MOS ICs, in particular, both simultaneous irradiation and annealing, as well as short- and long-term annealing, including the case when annealing is nearing completion. For the first time, a simple procedure is devised to determine the response function from experimental radiation/annealing data. In addition, this procedure enables us to study the effect of variable temperature and dose rate, effects which are of interest in spaceflight. In the past, the shift in threshold potential due to radiation/annealing has usually been assumed to depend on one variable: the time lapse between an impulse dose and the time of observation. While such an assumption of uniformity in time is certainly valid for a broad range of radiation annealing phenomena, it may not hold for some ranges of the variables of interest (temperature, dose rate, etc.). A response function is proposed which depends on two variables: the time of observation and the time of the impulse dose. This dependence on two variables allows us to extend the theory to the treatment of a variable dose rate. Finally, the linear theory is generalized to the case in which the response is nonlinear with impulse dose, but is proportional to some impulse function of dose. A method to determine both the impulse and response functions is presented.
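
    The two-variable response picture can be illustrated with a discrete superposition (the exponential annealing form and all constants are invented for the sketch): the threshold shift at time t is the sum of dose increments deposited at each earlier time, weighted by the fraction of damage not yet annealed.

```python
import numpy as np

# Minimal sketch of the linear-response picture: the threshold shift is a
# superposition of impulse responses to each dose increment.  The response
# R(t, tau) is chosen, for illustration only, as a simple exponential anneal
# of the damage deposited at time tau.
T_anneal = 50.0                  # annealing time constant (arbitrary units)
t = np.arange(0.0, 200.0, 1.0)

def response(t_obs, t_dose):
    # R(t, tau): fraction of damage from an impulse at tau surviving at t.
    dt = t_obs - t_dose
    return np.where(dt >= 0.0, np.exp(-dt / T_anneal), 0.0)

# Variable dose rate: irradiation only during the first 100 time units.
dose_rate = np.where(t < 100.0, 1.0, 0.0)

# Superpose: delta_V(t) = sum over tau of D(tau) * R(t, tau) * d(tau).
delta_v = np.array([np.sum(dose_rate * response(ti, t)) for ti in t])
print(delta_v[99], delta_v[199])  # builds up while irradiating, anneals after
```

    A response depending genuinely on both t and tau (not just t - tau) would simply replace the `response` function; the superposition step is unchanged, which is what makes the two-variable generalization in the abstract tractable.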

  7. Cortical Contribution to Linear, Non-linear and Frequency Components of Motor Variability Control during Standing.

    PubMed

    König Ignasiak, Niklas; Habermacher, Lars; Taylor, William R; Singh, Navrag B

    2017-01-01

    Motor variability is an inherent feature of all human movements and reflects the quality of functional task performance. Depending on the requirements of the motor task, the human sensory-motor system is thought to be able to flexibly govern the appropriate level of variability. However, it remains unclear which neurophysiological structures are responsible for the control of motor variability. In this study, we tested the contribution of cortical cognitive resources to the control of motor variability (in this case postural sway) using a dual-task paradigm, and furthermore observed potential changes in control strategy by evaluating Ia-afferent integration (H-reflex). Twenty healthy subjects were instructed to stand relaxed on a force plate with eyes open and closed, as well as while trying to minimize sway magnitude and while performing a "subtracting-sevens" cognitive task. In total, 25 linear and non-linear parameters were used to evaluate postural sway, which were combined using a principal components procedure. The neurophysiological response of the Ia-afferent reflex loop was quantified using the Hoffmann reflex. To assess the contribution of the H-reflex to the sway outcome in the different standing conditions, multiple mixed-model ANCOVAs were performed. The results suggest that subjects were unable to further minimize their sway, despite actively focusing on doing so. The dual task had a destabilizing effect on postural sway, which could partly (by 4%) be counter-balanced by increasing reliance on Ia-afferent information. The effect of the dual task was larger than the protective mechanism of increasing Ia-afferent information. We therefore conclude that cortical structures, as compared to peripheral reflex loops, play a dominant role in the control of motor variability.

  8. Unbiased split variable selection for random survival forests using maximally selected rank statistics.

    PubMed

    Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas

    2017-04-15

    The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.

  9. Statistical methods and regression analysis of stratospheric ozone and meteorological variables in Isfahan

    NASA Astrophysics Data System (ADS)

    Hassanzadeh, S.; Hosseinibalam, F.; Omidvari, M.

    2008-04-01

    Data for seven meteorological variables (relative humidity, wet temperature, dry temperature, maximum temperature, minimum temperature, ground temperature and sun radiation time) and ozone values have been used for statistical analysis. Meteorological variables and ozone values were analyzed using both multiple linear regression and principal component methods. Data for the period 1999-2004 are analyzed jointly using both methods. For all periods, the temperature-dependent variables were highly correlated, but were all negatively correlated with relative humidity. Multiple regression analysis was used to fit the ozone values using the meteorological variables as predictors. A variable selection method based on high loadings in varimax-rotated principal components was used to obtain subsets of the predictor variables to be included in the linear regression model of the ozone values. In 1999, 2001 and 2002 the ozone concentrations were influenced predominantly, though weakly, by one of the meteorological variables. For the year 2000, however, the model indicated that the ozone concentrations were not influenced predominantly by the meteorological variables, which points to variation in sun radiation. This could be due to other factors that were not explicitly considered in this study.
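
    The selection-then-regression workflow can be sketched on synthetic data (the variables are stand-ins and the loadings below are unrotated; the study uses varimax-rotated components):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Synthetic stand-ins for correlated meteorological variables.
temp = rng.normal(25, 5, n)
temp_max = temp + rng.normal(3, 1, n)      # strongly tied to temp
humidity = -0.5 * temp + rng.normal(60, 3, n)
radiation = rng.normal(8, 2, n)            # mostly independent
X = np.column_stack([temp, temp_max, humidity, radiation])
y = 0.8 * temp - 0.3 * humidity + 0.5 * radiation + rng.normal(0, 1, n)

# Standardize, then PCA via SVD of the data matrix.
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Crude subset selection (no varimax rotation here): keep the variable
# loading most heavily on each of the leading components.
n_comp = 2
chosen = sorted({int(np.argmax(np.abs(Vt[i]))) for i in range(n_comp)})

# Ordinary least squares on the selected predictors.
Xs = np.column_stack([np.ones(n), Z[:, chosen]])
beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
print(explained.round(2), chosen, beta.round(2))
```

    With varimax rotation each rotated component loads strongly on fewer variables, making the subset choice cleaner; the unrotated version above only conveys the mechanics.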

  10. A Linearized Prognostic Cloud Scheme in NASAs Goddard Earth Observing System Data Assimilation Tools

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Errico, Ronald M.; Gelaro, Ronald; Kim, Jong G.; Mahajan, Rahul

    2015-01-01

    A linearized prognostic cloud scheme has been developed to accompany the linearized convection scheme recently implemented in NASA's Goddard Earth Observing System data assimilation tools. The linearization, developed from the nonlinear cloud scheme, treats cloud variables prognostically so they are subject to linearized advection, diffusion, generation, and evaporation. Four linearized cloud variables are modeled: the ice and water phases of clouds generated by large-scale condensation and, separately, by detraining convection. For each species the scheme models their sources, sublimation, evaporation, and autoconversion. Large-scale, anvil, and convective species of precipitation are modeled and evaporated. The cloud scheme exhibits linearity and realistic perturbation growth, except around the generation of clouds through large-scale condensation. Discontinuities and steep gradients are widespread here, and severe problems occur in the calculation of cloud fraction. For data assimilation applications this poor behavior is controlled by replacing this part of the scheme with a perturbation model. For observation impacts, where efficiency is less of a concern, a filtering is developed that examines the Jacobian. The replacement scheme is only invoked if Jacobian elements or eigenvalues violate a series of tuned constants. The linearized prognostic cloud scheme is tested by comparing the linear and nonlinear perturbation trajectories for 6-, 12-, and 24-h forecast times. The tangent linear model performs well and perturbations of clouds are well captured for the lead times of interest.

  11. Autonomic cardiovascular modulation with three different anesthetic strategies during neurosurgical procedures.

    PubMed

    Guzzetti, S; Bassani, T; Latini, R; Masson, S; Barlera, S; Citerio, G; Porta, A

    2015-01-01

    Autonomic cardiovascular modulation during surgery might be affected by different anesthetic strategies. The aim of the present study was to assess autonomic control under three different anesthetic strategies in the course of neurosurgical procedures by linear and non-linear analysis of two cardiovascular signals. Heart rate (EKG-RR intervals) and systolic arterial pressure (SAP) signals were analyzed in 93 patients during elective neurosurgical procedures at fixed points: anesthetic induction, dura mater opening, first and second hour of surgery, and dura mater and skin closure. Patients were randomly assigned to three anesthetic strategies: sevoflurane+fentanyl (S-F), sevoflurane+remifentanil (S-R) and propofol+remifentanil (P-R). All three anesthetic strategies were characterized by a reduction of RR and SAP variability. A more active sympathetic modulation, measured as the ratio of the low- to high-frequency spectral components of RR variability (LF/HF), was present in the P-R group vs. the S-R group. This is confirmed by non-linear symbolic analysis of the RR series and by SAP variability analysis. In addition, an increased parasympathetic modulation was suggested by symbolic analysis of the RR series during the second hour of surgery in the S-F group. Despite a marked reduction of cardiovascular signal variability, the analysis of the RR and SAP signals was able to extract information about autonomic control during anesthesia. Symbolic analysis (non-linear) seems able to highlight differences in both the sympathetic (slow) and vagal (fast) modulation among anesthetics, while spectral analysis (linear) underlines the same differences but only in terms of the balance between the two neural control systems.
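
    The LF/HF index used in such studies can be computed from an RR-interval series along the following lines (synthetic data; the modulation frequencies, amplitudes, and 4 Hz resampling rate are conventional choices, not the study's processing chain):

```python
import numpy as np
from scipy.signal import welch

# Synthetic RR-interval series (seconds) with a 0.10 Hz ("LF") and a
# 0.25 Hz ("HF") modulation, generated per beat.
rng = np.random.default_rng(2)
n_beats = 600
tt = np.cumsum(np.full(n_beats, 0.8))            # nominal beat times (s)
rr = (0.8 + 0.03 * np.sin(2 * np.pi * 0.10 * tt)
          + 0.015 * np.sin(2 * np.pi * 0.25 * tt)
          + rng.normal(0, 0.002, n_beats))
beat_times = np.cumsum(rr)

# Resample the unevenly spaced RR series onto a uniform 4 Hz grid.
fs = 4.0
t_even = np.arange(beat_times[0], beat_times[-1], 1 / fs)
rr_even = np.interp(t_even, beat_times, rr)

# Welch power spectral density, then integrate the conventional bands.
f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)

def band_power(lo, hi):
    m = (f >= lo) & (f < hi)
    return pxx[m].sum() * (f[1] - f[0])

lf = band_power(0.04, 0.15)   # low-frequency band
hf = band_power(0.15, 0.40)   # high-frequency ("vagal") band
print(lf / hf)                # > 1 here, since the LF modulation is larger
```

    Because the LF modulation amplitude is twice the HF amplitude here, the band-power ratio comes out near 4; shifting power between the two sine terms moves the index accordingly.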

  12. Application of Multiregressive Linear Models, Dynamic Kriging Models and Neural Network Models to Predictive Maintenance of Hydroelectric Power Systems

    NASA Astrophysics Data System (ADS)

    Lucifredi, A.; Mazzieri, C.; Rossi, M.

    2000-05-01

    Since the operational conditions of a hydroelectric unit can vary within a wide range, the monitoring system must be able to distinguish between variations of the monitored variable caused by changes in the operating conditions and those due to the onset and progression of failures and misoperations. The paper aims to identify the best technique to be adopted for the monitoring system. Three different methods have been implemented and compared. Two of them use statistical techniques: the first, multiple linear regression, expresses the monitored variable as a linear function of the process parameters (independent variables), while the second, the dynamic kriging technique, is a modified form of multiple linear regression that represents the monitored variable as a linear combination of the process variables in such a way as to minimize the variance of the estimation error. The third is based on neural networks. Tests have shown that the monitoring system based on the kriging technique is not affected by some problems common to the other two models, such as the requirement of a large amount of data for tuning (both for training the neural network and for defining the optimum plane for the multiple regression), not only in the system start-up phase but also after a trivial maintenance operation involving the substitution of machinery components that have a direct impact on the observed variable, or the need for different models to describe satisfactorily the different operating ranges of the plant. The monitoring system based on the kriging statistical technique overcomes these difficulties: it does not require a large amount of data to be tuned and is immediately operational (given two points, the third can be immediately estimated); in addition, the model follows the system without adapting itself to it.
The results of the experiments performed seem to indicate that a model based on a neural network or on multiple linear regression is not optimal, and that a different approach is necessary to reduce the amount of work during the learning phase, using, when available, all the information stored during the initial phase of the plant to build the reference baseline and elaborating the raw information where appropriate. A mixed approach combining the kriging statistical technique with neural network techniques could optimise the result.
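
    The kriging estimator the paper relies on can be sketched in one dimension (Gaussian covariance and observation points invented for illustration): the weights come from a single linear solve that includes a Lagrange multiplier forcing them to sum to one, which is what minimizes the estimation variance without bias.

```python
import numpy as np

# Minimal ordinary-kriging sketch: predict a monitored variable at new
# operating points as a linear combination of observed values.
def cov(a, b, length=1.0, sill=1.0):
    # Assumed Gaussian covariance model.
    d = np.abs(a[:, None] - b[None, :])
    return sill * np.exp(-(d / length) ** 2)

x_obs = np.array([0.0, 1.0, 2.5, 4.0])
y_obs = np.sin(x_obs)

def krige(x_new):
    n = x_obs.size
    # Ordinary kriging system: covariances plus a Lagrange-multiplier row
    # enforcing that the weights sum to one (unbiasedness).
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(x_obs, x_obs)
    K[n, n] = 0.0
    k = np.ones(n + 1)
    k[:n] = cov(np.atleast_1d(x_new), x_obs).ravel()
    w = np.linalg.solve(K, k)
    return w[:n] @ y_obs

print(krige(1.0), np.sin(1.0))   # exact interpolation at an observed point
print(krige(1.5), np.sin(1.5))   # a smooth estimate between observations
```

    Exact reproduction of observed points is the property behind the abstract's "given two points, the third can be immediately estimated": no training phase is needed, only this solve.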

  13. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape

    PubMed Central

    Coupé, Christophe

    2018-01-01

    As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. 
Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Cole and Green, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables. PMID:29713298

  15. Method and Apparatus for Separating Particles by Dielectrophoresis

    NASA Technical Reports Server (NTRS)

    Pant, Kapil (Inventor); Wang, Yi (Inventor); Bhatt, Ketan (Inventor); Prabhakarpandian, Balabhasker (Inventor)

    2014-01-01

    Particle separation apparatus separate particles and particle populations using dielectrophoretic (DEP) forces generated by one or more pairs of electrically coupled electrodes separated by a gap. Particles suspended in a fluid are separated by DEP forces generated by the at least one electrode pair at the gap as they travel over a separation zone comprising the electrode pair. Selected particles are deflected relative to the flow of incoming particles by DEP forces, which are affected by controlling the applied potential, the gap width, and the angle of the linear gaps with respect to the fluid flow. The gap between an electrode pair may be a single linear gap of constant width, a single linear gap of variable width, or two or more linear gaps of constant or variable width oriented at different angles with respect to one another and to the flow.

  16. Derivation and definition of a linear aircraft model

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.

    1988-01-01

    A linear aircraft model for a rigid aircraft of constant mass flying over a flat, nonrotating earth is derived and defined. The derivation makes no assumptions of reference trajectory or vehicle symmetry. The linear system equations are derived and evaluated along a general trajectory and include both aircraft dynamics and observation variables.

  17. Computing Linear Mathematical Models Of Aircraft

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.

    1991-01-01

    Derivation and Definition of Linear Aircraft Model (LINEAR) computer program provides user with powerful, flexible, standard, documented, and verified software tool for linearization of mathematical models of aerodynamics of aircraft. Intended for use as software tool to drive linear analysis of stability and design of control laws for aircraft. Capable of both extracting such linearized engine effects as net thrust, torque, and gyroscopic effects, and including these effects in linear model of system. Designed to provide easy selection of state, control, and observation variables used in particular model. Also provides flexibility of allowing alternate formulations of both state and observation equations. Written in FORTRAN.
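
    The core operation of such a linearization tool, extracting A = df/dx and B = df/du about a trim point, can be sketched with central finite differences (the toy longitudinal dynamics below are invented for illustration and are not LINEAR's aircraft model):

```python
import numpy as np

# Toy nonlinear longitudinal model x' = f(x, u): state is (airspeed v,
# flight-path angle gamma), control is thrust.  Invented for illustration.
def f(x, u):
    v, gamma = x
    thrust = u[0]
    return np.array([thrust - 0.02 * v**2 - 9.81 * np.sin(gamma),
                     (9.81 / v) * (v**2 / 500.0 - np.cos(gamma))])

def linearize(f, x0, u0, eps=1e-6):
    # Central-difference Jacobians about the operating point (x0, u0).
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

x0 = np.array([22.36, 0.0])       # near trim: v^2/500 ~ cos(gamma)
u0 = np.array([0.02 * 22.36**2])  # thrust balancing drag at trim
A, B = linearize(f, x0, u0)
print(A.round(3))
print(B.round(3))
```

    Analytically, A[0, 1] = -g*cos(gamma) = -9.81 and B = [1, 0]^T at this trim, which the finite differences reproduce; a production tool adds trim solving and the selection of observation variables on top of this step.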

  18. Agent based reasoning for the non-linear stochastic models of long-range memory

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Gontis, V.

    2012-02-01

    We extend Kirman's model by introducing a variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. The stochastic version of the extended Kirman agent-based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent-based model, providing a matching macroscopic description, serves as a microscopic justification of the earlier proposed stochastic model exhibiting power-law statistics.
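
    The underlying herding mechanism can be sketched with a minimal constant-time-scale simulation (parameter values invented; the paper's extension makes the event time scale itself variable):

```python
import numpy as np

# Minimal Kirman two-state herding model: at each step a randomly picked
# agent switches state with probability sigma (idiosyncratic) plus
# h * (fraction of agents in the other state) (herding).
rng = np.random.default_rng(3)
N, sigma, h = 100, 0.002, 0.05
steps = 20000

k = N // 2                      # number of agents in state "1"
history = np.empty(steps)
for t in range(steps):
    if rng.random() < k / N:                 # picked an agent in state 1
        p_switch = sigma + h * (N - k) / N
        k -= rng.random() < p_switch
    else:                                    # picked an agent in state 0
        p_switch = sigma + h * k / N
        k += rng.random() < p_switch
    history[t] = k / N

print(history.mean(), history.std())
```

    With sigma small relative to h, the fraction k/N drifts toward the extremes and switches between them intermittently, the herding behavior that the stochastic long-range-memory models reproduce at the macroscopic level.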

  19. The Asian clam Corbicula fluminea as a biomonitor of trace element contamination: Accounting for different sources of variation using a hierarchical linear model

    USGS Publications Warehouse

    Shoults-Wilson, W. A.; Peterson, J.T.; Unrine, J.M.; Rickard, J.; Black, M.C.

    2009-01-01

    In the present study, specimens of the invasive clam, Corbicula fluminea, were collected above and below possible sources of potentially toxic trace elements (As, Cd, Cr, Cu, Hg, Pb, and Zn) in the Altamaha River system (Georgia, USA). Bioaccumulation of these elements was quantified, along with environmental (water and sediment) concentrations. Hierarchical linear models were used to account for variability in tissue concentrations related to environmental (site water chemistry and sediment characteristics) and individual (growth metrics) variables while identifying the strongest relations between these variables and trace element accumulation. The present study found significantly elevated concentrations of Cd, Cu, and Hg downstream of the outfall of kaolin-processing facilities, Zn downstream of a tire cording facility, and Cr downstream of both a nuclear power plant and a paper pulp mill. Models of the present study indicated that variation in trace element accumulation was linked to distance upstream from the estuary, dissolved oxygen, percentage of silt and clay in the sediment, elemental concentrations in sediment, shell length, and bivalve condition index. By explicitly modeling environmental variability, the hierarchical linear modeling procedure allowed the identification of sites showing increased accumulation of trace elements that may have been caused by human activity. Hierarchical linear modeling is a useful tool for accounting for environmental and individual sources of variation in bioaccumulation studies. © 2009 SETAC.

  20. Linear relations between leaf mass per area (LMA) and seasonal climate discovered through Linear Manifold Clustering (LMC)

    NASA Astrophysics Data System (ADS)

    Kiang, N. Y.; Haralick, R. M.; Diky, A.; Kattge, J.; Su, X.

    2016-12-01

    Leaf mass per area (LMA) is a critical variable in plant carbon allocation, correlates with leaf activity traits (photosynthetic activity, respiration), and is a controller of litterfall mass and hence carbon substrate for soil biogeochemistry. Recent advances in understanding the leaf economics spectrum (LES) show that LMA has a strong correlation with leaf life span, a trait that reflects ecological strategy, whereas physiological traits that control leaf activity scale with each other when mass-normalized (Osnas et al., 2013). These functional relations help reduce the number of independent variables in quantifying leaf traits. However, LMA is an independent variable that remains a challenge to specify in dynamic global vegetation models (DGVMs), when vegetation types are classified into a limited number of plant functional types (PFTs) without clear mechanistic drivers for LMA. LMA can range orders of magnitude across plant species, as well as vary within a single plant, both vertically and seasonally. As climate relations in combination with alternative ecological strategies have yet to be well identified for LMA, we have assembled 22,000 records of LMA spanning 0.004 - 33 mg/m2 from the numerous contributors to the TRY database (Kattge et al., 2011), with observations distributed over several climate zones and plant functional categories (growth form, leaf type, phenology). We present linear relations between LMA and climate variables, including seasonal temperature, precipitation, and radiation, as derived through Linear Manifold Clustering (LMC). LMC is a stochastic search technique for identifying linear dependencies between variables in high dimensional space. We identify a set of parsimonious classes of LMA-climate groups based on a metric of minimum description to identify structure in the data set, akin to data compression. 
The relations in each group are compared to Köppen-Geiger climate classes, with some groups revealing continuous linear relations between what might appear to be distinct classes. We discuss these results with regard to parameterization and evaluation of DGVMs in terms of plant diversity and representation of the carbon cycle.

  1. Valuation of financial models with non-linear state spaces

    NASA Astrophysics Data System (ADS)

    Webber, Nick

    2001-02-01

    A common assumption in valuation models for derivative securities is that the underlying state variables take values in a linear state space. We discuss numerical implementation issues in an interest rate model with a simple non-linear state space, formulating and comparing Monte Carlo, finite difference and lattice numerical solution methods. We conclude that, at least in low dimensional spaces, non-linear interest rate models may be viable.

  2. Biostatistics Series Module 10: Brief Overview of Multivariate Methods.

    PubMed

    Hazra, Avijit; Gogtay, Nithya

    2017-01-01

    Multivariate analysis refers to statistical techniques that simultaneously look at three or more variables in relation to the subjects under investigation with the aim of identifying or clarifying the relationships between them. These techniques have been broadly classified as dependence techniques, which explore the relationship between one or more dependent variables and their independent predictors, and interdependence techniques, which make no such distinction and treat all variables equally in a search for underlying relationships. Multiple linear regression models a situation where a single numerical dependent variable is to be predicted from multiple numerical independent variables. Logistic regression is used when the outcome variable is dichotomous in nature. The log-linear technique models count-type data and can be used to analyze cross-tabulations where more than two variables are included. Analysis of covariance is an extension of analysis of variance (ANOVA), in which an additional independent variable of interest, the covariate, is brought into the analysis. It tries to examine whether a difference persists after "controlling" for the effect of the covariate that can impact the numerical dependent variable of interest. Multivariate analysis of variance (MANOVA) is a multivariate extension of ANOVA used when multiple numerical dependent variables have to be incorporated in the analysis. Interdependence techniques are more commonly applied to psychometrics, social sciences and market research. Exploratory factor analysis and principal component analysis are related techniques that seek to extract, from a larger number of metric variables, a smaller number of composite factors or components, which are linearly related to the original variables. Cluster analysis aims to identify, in a large number of cases, relatively homogeneous groups called clusters, without prior information about the groups. 
The calculation-intensive nature of multivariate analysis has so far precluded most researchers from using these techniques routinely. The situation is now changing with the wider availability and increasing sophistication of statistical software, and researchers should no longer shy away from exploring the applications of multivariate methods to real-life data sets.
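    As a concrete illustration of the dependence techniques described above, the following sketch fits a multiple linear regression by ordinary least squares on synthetic data (the variables, coefficients, and noise level are invented for illustration):

```python
import numpy as np

# Multiple linear regression: one numerical dependent variable predicted
# from several numerical independent variables, fit by least squares.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)                       # predictor 1
x2 = rng.normal(size=n)                       # predictor 2
y = 2.0 + 1.5 * x1 - 0.7 * x2 + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x1, x2])     # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)                                   # estimates near [2.0, 1.5, -0.7]
```

With more predictors the same call applies unchanged; only the design matrix grows.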

  3. 2D versus 3D in the kinematic analysis of the horse at the trot.

    PubMed

    Miró, F; Santos, R; Garrido-Castro, J L; Galisteo, A M; Medina-Carnicer, R

    2009-08-01

    The handled trot of three Lusitano Purebred stallions was analyzed by using 2D and 3D kinematical analysis methods. Using the same capture and analysis system, 2D and 3D data of some linear (stride length, maximal height of the hoof trajectories) and angular (angular range of motion, inclination of bone segments) variables were obtained. A paired Student's t-test was performed in order to detect statistically significant differences between data resulting from the two methodologies. With respect to the angular variables, there were significant differences in scapula inclination, shoulder angle, cannon inclination and protraction-retraction angle in the forelimb variables, but none of them were statistically different in the hind limb. Differences between the two methods were found in most of the linear variables analyzed.

  4. Integrated Logistics Support Analysis of the International Space Station Alpha, Background and Summary of Mathematical Modeling and Failure Density Distributions Pertaining to Maintenance Time Dependent Parameters

    NASA Technical Reports Server (NTRS)

    Sepehry-Fard, F.; Coulthard, Maurice H.

    1995-01-01

    The process of predicting the values of maintenance time dependent variable parameters such as mean time between failures (MTBF) over time must be one that will not in turn introduce uncontrolled deviation in the results of the ILS analysis such as life cycle costs, spares calculation, etc. A minor deviation in the values of the maintenance time dependent variable parameters such as MTBF over time will have a significant impact on the logistics resources demands, International Space Station availability and maintenance support costs. There are two types of parameters in the logistics and maintenance world: (a) fixed and (b) variable. Fixed parameters, such as cost per man hour, are relatively easy to predict and forecast. These parameters normally follow a linear path and they do not change randomly. However, the variable parameters studied in this report, such as MTBF, do not follow a linear path and normally fall within the distribution curves discussed in this publication. The very challenging task then becomes the utilization of statistical techniques to accurately forecast the future non-linear time dependent variable arisings and events with a high confidence level. This, in turn, translates into substantial cost savings and improved availability.

  5. Testing concordance of instrumental variable effects in generalized linear models with application to Mendelian randomization

    PubMed Central

    Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li

    2014-01-01

    Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effect in observational studies. Built on structural mean models, there has been considerable recent work on consistent estimation of causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments, which has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
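    For context, the classical two-stage least squares (2SLS) estimator that the abstract uses as a benchmark can be sketched on synthetic Mendelian-randomization-style data. The instrument strengths, confounder, and effect sizes below are our own invented assumptions, not the paper's setup:

```python
import numpy as np

# Two-stage least squares with three instruments (e.g. genetic variants),
# a continuous exposure X, an unmeasured confounder U, and outcome y.
rng = np.random.default_rng(1)
n = 5000
Z = rng.binomial(2, 0.3, size=(n, 3)).astype(float)          # instruments
U = rng.normal(size=n)                                       # unmeasured confounder
X = Z @ np.array([0.5, 0.4, 0.3]) + U + rng.normal(size=n)   # exposure
y = 1.0 * X + 2.0 * U + rng.normal(size=n)                   # true causal effect = 1

def ols(A, b):
    return np.linalg.lstsq(A, b, rcond=None)[0]

Zc = np.column_stack([np.ones(n), Z])
X_hat = Zc @ ols(Zc, X)                                  # stage 1: project X on Z
beta = ols(np.column_stack([np.ones(n), X_hat]), y)[1]   # stage 2: near 1.0

naive = ols(np.column_stack([np.ones(n), X]), y)[1]      # confounded OLS, biased up
print(beta, naive)
```

The naive regression absorbs the confounder's contribution, while 2SLS uses only the instrument-driven variation in the exposure.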

  6. Reasons for Hierarchical Linear Modeling: A Reminder.

    ERIC Educational Resources Information Center

    Wang, Jianjun

    1999-01-01

    Uses examples of hierarchical linear modeling (HLM) at local and national levels to illustrate proper applications of HLM and dummy variable regression. Raises cautions about the circumstances under which hierarchical data do not need HLM. (SLD)

  7. Quantiles for Finite Mixtures of Normal Distributions

    ERIC Educational Resources Information Center

    Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.

    2006-01-01

    Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
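    The distinction the abstract emphasizes can be made concrete: a linear combination of independent normal random variables is itself normal, so its quantiles are available in closed form, whereas the quantiles of a finite mixture of normal densities must be found numerically from the mixture CDF. A minimal standard-library sketch with illustrative parameters:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mixture_cdf(x, weights, mus, sigmas):
    return sum(w * phi((x - m) / s) for w, m, s in zip(weights, mus, sigmas))

def mixture_quantile(p, weights, mus, sigmas, lo=-50.0, hi=50.0):
    # bisection on the monotone mixture CDF
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mixture_cdf(mid, weights, mus, sigmas) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

w, mu, sd = [0.5, 0.5], [0.0, 4.0], [1.0, 1.0]
# (a) 0.5*X1 + 0.5*X2 with X1 ~ N(0,1), X2 ~ N(4,1) is N(2, 0.5): median 2.
# (b) The equal mixture of N(0,1) and N(4,1) also has median 2 by symmetry,
#     but its upper quantiles sit far from those of N(2, 0.5).
q50 = mixture_quantile(0.5, w, mu, sd)    # ~2.0
q975 = mixture_quantile(0.975, w, mu, sd) # ~5.64, versus ~3.39 for N(2, 0.5)
print(q50, q975)
```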

  8. Sources of signal-dependent noise during isometric force production.

    PubMed

    Jones, Kelvin E; Hamilton, Antonia F; Wolpert, Daniel M

    2002-09-01

    It has been proposed that the invariant kinematics observed during goal-directed movements result from reducing the consequences of signal-dependent noise (SDN) on motor output. The purpose of this study was to investigate the presence of SDN during isometric force production and determine how central and peripheral components contribute to this feature of motor control. Peripheral and central components were distinguished experimentally by comparing voluntary contractions to those elicited by electrical stimulation of the extensor pollicis longus muscle. To determine other factors of motor-unit physiology that may contribute to SDN, a model was constructed and its output compared with the empirical data. SDN was evident in voluntary isometric contractions as a linear scaling of force variability (SD) with respect to the mean force level. However, during electrically stimulated contractions to the same force levels, the variability remained constant over the same range of mean forces. When the subjects were asked to combine voluntary with stimulation-induced contractions, the linear scaling relationship between the SD and mean force returned. The modeling results highlight that much of the basic physiological organization of the motor-unit pool, such as range of twitch amplitudes and range of recruitment thresholds, biases force output to exhibit linearly scaled SDN. This is in contrast to the square root scaling of variability with mean force present in any individual motor-unit of the pool. Orderly recruitment by twitch amplitude was a necessary condition for producing linearly scaled SDN. Surprisingly, the scaling of SDN was independent of the variability of motoneuron firing and therefore by inference, independent of presynaptic noise in the motor command. 
We conclude that the linear scaling of SDN during voluntary isometric contractions is a natural by-product of the organization of the motor-unit pool that does not depend on signal-dependent noise in the motor command. Synaptic noise in the motor command and common drive, which give rise to the variability and synchronization of motoneuron spiking, determine the magnitude of the force variability at a given level of mean force output.

  9. Variable Importance in Multivariate Group Comparisons.

    ERIC Educational Resources Information Center

    Huberty, Carl J.; Wisenbaker, Joseph M.

    1992-01-01

    Interpretations of relative variable importance in multivariate analysis of variance are discussed, with attention to (1) latent construct definition; (2) linear discriminant function scores; and (3) grouping variable effects. Two numerical ranking methods are proposed and compared by the bootstrap approach using two real data sets. (SLD)

  10. Adjusted variable plots for Cox's proportional hazards regression model.

    PubMed

    Hall, C B; Zeger, S L; Bandeen-Roche, K J

    1996-01-01

    Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fit to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.

  11. Automatic control: the vertebral column of dogfish sharks behaves as a continuously variable transmission with smoothly shifting functions.

    PubMed

    Porter, Marianne E; Ewoldt, Randy H; Long, John H

    2016-09-15

    During swimming in dogfish sharks, Squalus acanthias, both the intervertebral joints and the vertebral centra undergo significant strain. To investigate this system, unique among vertebrates, we cyclically bent isolated segments of 10 vertebrae and nine joints. For the first time in the biomechanics of fish vertebral columns, we simultaneously characterized non-linear elasticity and viscosity throughout the bending oscillation, extending recently proposed techniques for large-amplitude oscillatory shear (LAOS) characterization to large-amplitude oscillatory bending (LAOB). The vertebral column segments behave as non-linear viscoelastic springs. Elastic properties dominate for all frequencies and curvatures tested, increasing as either variable increases. Non-linearities within a bending cycle are most in evidence at the highest frequency, 2.0 Hz, and curvature, 5 m⁻¹. Viscous bending properties are greatest at low frequencies and high curvatures, with non-linear effects occurring at all frequencies and curvatures. The range of mechanical behaviors includes that of springs and brakes, with smooth transitions between them that allow for continuously variable power transmission by the vertebral column to assist in the mechanics of undulatory propulsion. © 2016. Published by The Company of Biologists Ltd.

  12. Heteroscedasticity as a Basis of Direction Dependence in Reversible Linear Regression Models.

    PubMed

    Wiedermann, Wolfgang; Artner, Richard; von Eye, Alexander

    2017-01-01

    Heteroscedasticity is a well-known issue in linear regression modeling. When heteroscedasticity is observed, researchers are advised to remedy possible model misspecification of the explanatory part of the model (e.g., considering alternative functional forms and/or omitted variables). The present contribution discusses another source of heteroscedasticity in observational data: Directional model misspecifications in the case of nonnormal variables. Directional misspecification refers to situations where alternative models are equally likely to explain the data-generating process (e.g., x → y versus y → x). It is shown that the homoscedasticity assumption is likely to be violated in models that erroneously treat true nonnormal predictors as response variables. Recently, Direction Dependence Analysis (DDA) has been proposed as a framework to empirically evaluate the direction of effects in linear models. The present study links the phenomenon of heteroscedasticity with DDA and describes visual diagnostics and nine homoscedasticity tests that can be used to make decisions concerning the direction of effects in linear models. Results of a Monte Carlo simulation that demonstrate the adequacy of the approach are presented. An empirical example is provided, and applicability of the methodology in cases of violated assumptions is discussed.
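    A rough numerical illustration of the direction-dependence idea: when a strongly nonnormal true predictor is mistakenly treated as the response, the reversed regression tends to show heteroscedastic residuals. The sketch below implements the Breusch-Pagan LM statistic by hand on synthetic data; the data-generating process and all names are our own assumptions, not the paper's simulation design:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.chisquare(df=1, size=n)      # nonnormal "true" predictor
y = 1.0 * x + rng.normal(size=n)     # true direction: x -> y

def breusch_pagan_lm(pred, resp):
    """LM statistic: n * R^2 from regressing squared residuals on the regressor."""
    A = np.column_stack([np.ones(len(pred)), pred])
    resid = resp - A @ np.linalg.lstsq(A, resp, rcond=None)[0]
    u = resid ** 2
    fit = A @ np.linalg.lstsq(A, u, rcond=None)[0]
    r2 = 1.0 - np.sum((u - fit) ** 2) / np.sum((u - u.mean()) ** 2)
    return len(pred) * r2            # ~ chi2(1) under homoscedasticity

lm_true = breusch_pagan_lm(x, y)     # correct direction: small statistic
lm_wrong = breusch_pagan_lm(y, x)    # reversed direction: large statistic
print(lm_true, lm_wrong)
```

The asymmetry between the two statistics is the diagnostic signal DDA exploits; in practice one would use the formal tests and diagnostics the paper describes rather than this toy comparison.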

  13. Relationship of crop radiance to alfalfa agronomic values

    NASA Technical Reports Server (NTRS)

    Tucker, C. J.; Elgin, J. H., Jr.; Mcmurtrey, J. E., III

    1980-01-01

    Red and photographic infrared spectral data of alfalfa were collected at the time of the third and fourth cuttings using a hand-held radiometer for the earlier alfalfa cutting. Significant linear and non-linear correlation coefficients were found between the spectral variables and plant height, biomass, forage water content, and estimated canopy cover. For the alfalfa of the later cutting, which had experienced a period of severe drought stress which limited growth, the spectral variables were found to be highly correlated with the estimated drought scores.

  14. Mathematics for Physics

    NASA Astrophysics Data System (ADS)

    Stone, Michael; Goldbart, Paul

    2009-07-01

    Preface; 1. Calculus of variations; 2. Function spaces; 3. Linear ordinary differential equations; 4. Linear differential operators; 5. Green functions; 6. Partial differential equations; 7. The mathematics of real waves; 8. Special functions; 9. Integral equations; 10. Vectors and tensors; 11. Differential calculus on manifolds; 12. Integration on manifolds; 13. An introduction to differential topology; 14. Group and group representations; 15. Lie groups; 16. The geometry of fibre bundles; 17. Complex analysis I; 18. Applications of complex variables; 19. Special functions and complex variables; Appendixes; Reference; Index.

  15. Analog synthesized fast-variable linear load

    NASA Technical Reports Server (NTRS)

    Niedra, Janis M.

    1991-01-01

    A several kilowatt power level, fast-variable linear resistor was synthesized by using analog components to control the conductance of power MOSFETs. Risetimes observed have been as short as 500 ns with respect to the control signal and 1 to 2 microseconds with respect to the power source voltage. A variant configuration of this load that dissipates a constant power set by a control signal is indicated. Replacement of the MOSFETs by static induction transistors (SITs) to increase power handling, speed and radiation hardness is discussed.

  16. Nonferromagnetic linear variable differential transformer

    DOEpatents

    Ellis, James F.; Walstrom, Peter L.

    1977-06-14

    A nonferromagnetic linear variable differential transformer for accurately measuring mechanical displacements in the presence of high magnetic fields is provided. The device utilizes a movable primary coil inside a fixed secondary coil that consists of two series-opposed windings. Operation is such that the secondary output voltage is maintained in phase (depending on polarity) with the primary voltage. The transducer is well-suited to long cable runs and is useful for measuring small displacements in the presence of high or alternating magnetic fields.
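    The series-opposed arrangement described above can be illustrated with a deliberately idealized toy model (our own simplification, not the patented circuit): if the couplings of the two secondaries vary oppositely and linearly with armature position, their difference is linear in displacement and its sign (phase) encodes direction.

```python
# Toy LVDT model: two series-opposed secondary windings whose coupling
# coefficients vary oppositely with armature displacement x.
def lvdt_output(x, v_primary=5.0, k0=0.4, stroke=10.0):
    """Differential secondary voltage for armature displacement x (mm)."""
    k1 = k0 * (1.0 + x / stroke)    # coupling to secondary 1
    k2 = k0 * (1.0 - x / stroke)    # coupling to secondary 2
    return (k1 - k2) * v_primary    # = 2 * k0 * v_primary * x / stroke

for x in (-5.0, 0.0, 5.0):
    print(x, lvdt_output(x))        # -2.0 V, 0.0 V, +2.0 V
```

At the null position the two induced voltages cancel exactly, which is why LVDTs resolve very small displacements about zero.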

  17. Automatic Assessment and Reduction of Noise using Edge Pattern Analysis in Non-Linear Image Enhancement

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.; Hines, Glenn D.

    2004-01-01

    Noise is the primary visibility limit in the process of non-linear image enhancement, and is no longer a statistically stable additive noise in the post-enhancement image. Therefore novel approaches are needed to both assess and reduce spatially variable noise at this stage in overall image processing. Here we will examine the use of edge pattern analysis both for automatic assessment of spatially variable noise and as a foundation for new noise reduction methods.

  18. Modular design attitude control system

    NASA Technical Reports Server (NTRS)

    Chichester, F. D.

    1984-01-01

    A sequence of single axis models and a series of reduced state linear observers of minimum order are used to reconstruct inaccessible variables pertaining to the modular attitude control of a rigid body flexible suspension model of a flexible spacecraft. The single axis models consist of two, three, four, and five rigid bodies, each interconnected by a flexible shaft passing through the mass centers of the bodies. Modal damping is added to each model. Reduced state linear observers are developed for synthesizing the inaccessible modal state variables for each modal model.
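    The idea of reconstructing inaccessible state variables with a linear observer can be sketched with a full-order discrete-time Luenberger observer (a simpler stand-in for the reduced-state minimum-order observers above; the system matrices and gain are invented for illustration):

```python
import numpy as np

# Plant: two states, only the first is measured; the observer reconstructs
# the "inaccessible" second state from the measurement history.
A = np.array([[1.0, 0.1],
              [-0.5, 0.9]])         # assumed plant dynamics (stable)
C = np.array([[1.0, 0.0]])          # only state 0 is measured
L = np.array([[0.8], [0.5]])        # observer gain: eig(A - L C) = {0.7, 0.4}

x = np.array([1.0, -1.0])           # true state (x[1] is unmeasured)
xh = np.zeros(2)                    # observer estimate, wrong initial guess
for _ in range(100):
    y = C @ x                       # measurement
    xh = A @ xh + (L @ (y - C @ xh)).ravel()   # predict + correct
    x = A @ x                       # plant evolves
print(xh, x)                        # estimate has converged to the true state
```

The estimation error obeys e⁺ = (A − LC) e, so choosing L to place the eigenvalues of A − LC inside the unit circle guarantees convergence regardless of the initial guess.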

  19. Characterizing the Optical Variability of Bright Blazars: Variability-based Selection of Fermi Active Galactic Nuclei

    NASA Astrophysics Data System (ADS)

    Ruan, John J.; Anderson, Scott F.; MacLeod, Chelsea L.; Becker, Andrew C.; Burnett, T. H.; Davenport, James R. A.; Ivezić, Željko; Kochanek, Christopher S.; Plotkin, Richard M.; Sesar, Branimir; Stuart, J. Scott

    2012-11-01

    We investigate the use of optical photometric variability to select and identify blazars in large-scale time-domain surveys, in part to aid in the identification of blazar counterparts to the ~30% of γ-ray sources in the Fermi 2FGL catalog still lacking reliable associations. Using data from the optical LINEAR asteroid survey, we characterize the optical variability of blazars by fitting a damped random walk model to individual light curves with two main model parameters, the characteristic timescale of variability τ and the driving amplitude on short timescales σ̂. Imposing cuts on minimum τ and σ̂ allows for blazar selection with high efficiency E and completeness C. To test the efficacy of this approach, we apply this method to optically variable LINEAR objects that fall within the several-arcminute error ellipses of γ-ray sources in the Fermi 2FGL catalog. Despite the extreme stellar contamination at the shallow depth of the LINEAR survey, we are able to recover previously associated optical counterparts to Fermi active galactic nuclei with E >= 88% and C = 88% in Fermi 95% confidence error ellipses having semimajor axis r < 8'. We find that the suggested radio counterpart to Fermi source 2FGL J1649.6+5238 has optical variability consistent with other γ-ray blazars and is likely to be the γ-ray source. Our results suggest that the variability of the non-thermal jet emission in blazars is stochastic in nature, with unique variability properties due to the effects of relativistic beaming. After correcting for beaming, we estimate that the characteristic timescale of blazar variability is ~3 years in the rest frame of the jet, in contrast with the ~320 day disk flux timescale observed in quasars. The variability-based selection method presented will be useful for blazar identification in time-domain optical surveys and is also a probe of jet physics.

  20. A significance test for the lasso

    PubMed Central

    Lockhart, Richard; Taylor, Jonathan; Tibshirani, Ryan J.; Tibshirani, Robert

    2014-01-01

    In the sparse linear regression setting, we consider testing the significance of the predictor variable that enters the current lasso model, in the sequence of models visited along the lasso solution path. We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an Exp(1) asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model). Our proof of this result for the special case of the first predictor to enter the model (i.e., testing for a single significant predictor variable against the global null) requires only weak assumptions on the predictor matrix X. On the other hand, our proof for a general step in the lasso path places further technical assumptions on X and the generative model, but still allows for the important high-dimensional case p > n, and does not necessarily require that the current lasso model achieves perfect recovery of the truly active variables. Of course, for testing the significance of an additional variable between two nested linear models, one typically uses the chi-squared test, comparing the drop in residual sum of squares (RSS) to a χ₁² distribution. But when this additional variable is not fixed, and has been chosen adaptively or greedily, this test is no longer appropriate: adaptivity makes the drop in RSS stochastically much larger than χ₁² under the null hypothesis. Our analysis explicitly accounts for adaptivity, as it must, since the lasso builds an adaptive sequence of linear models as the tuning parameter λ decreases. In this analysis, shrinkage plays a key role: though additional variables are chosen adaptively, the coefficients of lasso active variables are shrunken due to the ℓ1 penalty. 
Therefore, the test statistic (which is based on lasso fitted values) is in a sense balanced by these two opposing properties—adaptivity and shrinkage—and its null distribution is tractable and asymptotically Exp(1). PMID:25574062
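    The adaptivity problem described above (not the covariance test itself) is easy to reproduce numerically: under the global null, the RSS drop from greedily adding the best of p candidate predictors is stochastically much larger than the chi-squared(1) drop from adding one fixed predictor. A hedged simulation sketch with invented sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, reps = 100, 50, 500
adaptive_drops, fixed_drops = [], []
for _ in range(reps):
    X = rng.normal(size=(n, p))
    X /= np.linalg.norm(X, axis=0)      # unit-norm columns
    y = rng.normal(size=n)              # global null: no signal at all
    drops = (X.T @ y) ** 2              # RSS drop from each single column
    adaptive_drops.append(drops.max())  # greedy / adaptive choice
    fixed_drops.append(drops[0])        # pre-fixed choice: ~ chi2(1)
print(np.mean(adaptive_drops), np.mean(fixed_drops))
```

The fixed-predictor drops average about 1 (the chi-squared(1) mean), while the adaptively chosen drops average several times larger, which is exactly why the naive chi-squared test is anti-conservative after selection.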

  1. Cost drivers and resource allocation in military health care systems.

    PubMed

    Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R

    2007-03-01

    This study illustrates the feasibility of incorporating technical efficiency considerations in the funding of military hospitals and identifies the primary drivers for hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R² = 0.98). This model also proved reliable in forecasting (R² = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the United States-based Army hospital performance evaluated in this study.

  2. Differences between measured and linearly interpolated synoptic variables over a 12-h period during AVE 4

    NASA Technical Reports Server (NTRS)

    Dupuis, L. R.; Scoggins, J. R.

    1979-01-01

    Results of analyses revealed that nonlinear changes or differences formed centers or systems that were mesosynoptic in nature. These systems correlated well in space with upper level short waves, frontal zones, and radar observed convection, and were very systematic in time and space. Many of the centers of differences were well established in the vertical, extending up to the tropopause. Statistical analysis showed that, on average, nonlinear changes were larger in convective areas than in nonconvective regions. Errors often exceeding 100 percent were made by assuming variables to change linearly through a 12-h period in areas of thunderstorms, indicating that these nonlinear changes are important in the development of severe weather. Linear changes, however, accounted for more and more of an observed change as the time interval (within the 12-h interpolation period) increased, implying that the accuracy of linear interpolation increased over larger time intervals.

  3. Linear and nonlinear variable selection in competing risks data.

    PubMed

    Ren, Xiaowei; Li, Shanshan; Shen, Changyu; Yu, Zhangsheng

    2018-06-15

    The subdistribution hazard model for competing risks data has been applied extensively in clinical research. Variable selection methods for linear effects in competing risks data have been studied in the past decade, but there is no existing work on selection of potential nonlinear effects for the subdistribution hazard model. We propose a two-stage procedure to select the linear and nonlinear covariate(s) simultaneously and estimate the selected covariate effect(s). We use a spectral decomposition approach to distinguish the linear and nonlinear parts of each covariate and adaptive LASSO to select each of the two components. Extensive numerical studies are conducted to demonstrate that the proposed procedure can achieve good selection accuracy in the first stage and small estimation biases in the second stage. The proposed method is applied to analyze a cardiovascular disease data set with competing death causes. Copyright © 2018 John Wiley & Sons, Ltd.

  4. Variable volume combustor

    DOEpatents

    Ostebee, Heath Michael; Ziminsky, Willy Steve; Johnson, Thomas Edward; Keener, Christopher Paul

    2017-01-17

    The present application provides a variable volume combustor for use with a gas turbine engine. The variable volume combustor may include a liner, a number of micro-mixer fuel nozzles positioned within the liner, and a linear actuator so as to maneuver the micro-mixer fuel nozzles axially along the liner.

  5. Biomotor structures in elite female handball players.

    PubMed

    Katić, Ratko; Cavala, Marijana; Srhoj, Vatromir

    2007-09-01

    In order to identify biomotor structures in elite female handball players, factor structures of morphological characteristics and basic motor abilities of elite female handball players (N = 53) were determined first, followed by determination of relations between the morphological-motor space factors obtained and the set of criterion variables evaluating situation motor abilities in handball. Factor analysis of 14 morphological measures produced three morphological factors, i.e. factor of absolute voluminosity (mesoendomorph), factor of longitudinal skeleton dimensionality, and factor of transverse hand dimensionality. Factor analysis of 15 motor variables yielded five basic motor dimensions, i.e. factor of agility, factor of jumping explosive strength, factor of throwing explosive strength, factor of movement frequency rate, and factor of running explosive strength (sprint). Four significant canonical correlations, i.e. linear combinations, explained the correlation between the set of eight latent variables of the morphological and basic motor space and five variables of situation motoricity. The first canonical linear combination is based on the positive effect of the factors of agility/coordination on the ability of fast movement without ball. The second linear combination is based on the effect of jumping explosive strength and transverse hand dimensionality on ball manipulation, throw precision, and speed of movement with ball. The third linear combination is based on the determination of running explosive strength by the speed of movement with ball, whereas the fourth combination is determined by throwing and jumping explosive strength, and agility on ball pass. The results obtained were consistent with the proposed model of selection in female handball (Srhoj et al., 2006), showing the speed of movement without ball and the ability of ball manipulation to be the predominant specific abilities, as indicated by the first and second linear combinations.

  6. Exhaustive Search for Sparse Variable Selection in Linear Regression

    NASA Astrophysics Data System (ADS)

    Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato

    2018-04-01

    We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search method (AES-K) for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables such as relaxation and sampling. For large problems, where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be effectively reconstructed by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data. Using virtual measurement and analysis, we argue that this is caused by data shortage.
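    The core of a K-sparse exhaustive search, as we read the abstract, reduces to enumerating all K-subsets of explanatory variables and scoring each by its least-squares fit. A minimal sketch on synthetic data (the data, problem sizes, and ranking by RSS are our own illustrative assumptions, and no density of states is computed here):

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
n, p, K = 100, 8, 2
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 1] - 2.0 * X[:, 4] + 0.1 * rng.normal(size=n)  # true support {1, 4}

def rss(cols):
    """Residual sum of squares of the OLS fit on the given columns."""
    A = X[:, list(cols)]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ beta) ** 2)

# Exhaustive search over all C(p, K) supports, ranked by RSS.
results = sorted((rss(c), c) for c in itertools.combinations(range(p), K))
best_rss, best_support = results[0]
print(best_support)  # (1, 4)
```

Keeping the full sorted list, rather than only the minimizer, is what allows the exhaustive results to be summarized as a distribution over supports.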

  7. A test of a linear model of glaucomatous structure-function loss reveals sources of variability in retinal nerve fiber and visual field measurements.

    PubMed

    Hood, Donald C; Anderson, Susan C; Wall, Michael; Raza, Ali S; Kardon, Randy H

    2009-09-01

    Retinal nerve fiber layer (RNFL) thickness and visual field loss data from patients with glaucoma were analyzed in the context of a model, to better understand individual variation in structure versus function. Optical coherence tomography (OCT) RNFL thickness and standard automated perimetry (SAP) visual field loss were measured in the arcuate regions of one eye of 140 patients with glaucoma and 82 normal control subjects. An estimate of within-individual (measurement) error was obtained by repeat measures made on different days within a short period in 34 patients and 22 control subjects. A linear model, previously shown to describe the general characteristics of the structure-function data, was extended to predict the variability in the data. For normal control subjects, between-individual error (individual differences) accounted for 87% and 71% of the total variance in OCT and SAP measures, respectively. SAP within-individual error increased and then decreased with increased SAP loss, whereas OCT error remained constant. The linear model with variability (LMV) described much of the variability in the data. However, 12.5% of the patients' points fell outside the 95% boundary. An examination of these points revealed factors that can contribute to the overall variability in the data. These factors include epiretinal membranes, edema, individual variation in field-to-disc mapping, and the location of blood vessels and the degree to which they are included by the RNFL algorithm. The model and the partitioning of within- versus between-individual variability helped elucidate the factors contributing to the considerable variability in the structure-versus-function data.

  8. Relationship between the Arctic oscillation and surface air temperature in multi-decadal time-scale

    NASA Astrophysics Data System (ADS)

    Tanaka, Hiroshi L.; Tamura, Mina

    2016-09-01

    In this study, a simple energy balance model (EBM) was integrated in time, considering a hypothetical long-term variability in ice-albedo feedback mimicking the observed multi-decadal temperature variability. A natural variability was superimposed on a linear warming trend due to the increasing radiative forcing of CO2. The result demonstrates that the superposition of the natural variability and the background linear trend can offset each other to produce the warming hiatus for some period. It is also stressed that the rapid warming during 1970-2000 can be explained by the superposition of the natural variability and the background linear trend, at least within the simple model. The key process of the fluctuating planetary albedo on the multi-decadal time scale is investigated using the JRA-55 reanalysis data. It is found that the planetary albedo increased for 1958-1970, decreased for 1970-2000, and increased for 2000-2012, as expected from the simple EBM experiments. The multi-decadal variability in the planetary albedo is compared with the time series of the AO mode and the Barents Sea mode of surface air temperature. It is shown that the recent negative AO pattern, with a warm Arctic and cold mid-latitudes, is in good agreement with the planetary albedo change, which indicates a negative anomaly in high latitudes and a positive anomaly in mid-latitudes. Moreover, the Barents Sea mode, with a warm Barents Sea and cold mid-latitudes, shows long-term variability similar to the planetary albedo change. Although further studies are needed, the natural variabilities of both the AO mode and the Barents Sea mode indicate a possible link to the planetary albedo, as suggested by the simple EBM, as a cause of the warming hiatus in recent years.

  9. Tangent linear super-parameterization: attributable, decomposable moist processes for tropical variability studies

    NASA Astrophysics Data System (ADS)

    Mapes, B. E.; Kelly, P.; Song, S.; Hu, I. K.; Kuang, Z.

    2015-12-01

    An economical 10-layer global primitive equation solver is driven by time-independent forcing terms, derived from a training process, to produce a realistic eddying basic state with a tracer q trained to act like water vapor mixing ratio. Within this basic state, linearized anomaly moist physics in the column are applied in the form of a 20x20 matrix. The control matrix was derived from the results of Kuang (2010, 2012), who fitted a linear response function from a cloud-resolving model in a state of deep convecting equilibrium. By editing this matrix in physical space and eigenspace, scaling and clipping its action, and optionally adding terms for processes that do not conserve moist static energy (radiation, surface fluxes), we can decompose and explain the model's diverse moist-process-coupled variability. Rectified effects of this variability on the general circulation and climate, even in strictly zero-mean centered anomaly physics cases, are also sometimes surprising.

  10. Drug awareness in adolescents attending a mental health service: analysis of longitudinal data.

    PubMed

    Arnau, Jaume; Bono, Roser; Díaz, Rosa; Goti, Javier

    2011-11-01

    One of the procedures used most recently with longitudinal data is linear mixed models. In the context of health research the increasing number of studies that now use these models bears witness to the growing interest in this type of analysis. This paper describes the application of linear mixed models to a longitudinal study of a sample of Spanish adolescents attending a mental health service, the aim being to investigate their knowledge about the consumption of alcohol and other drugs. More specifically, the main objective was to compare the efficacy of a motivational interviewing programme with a standard approach to drug awareness. The models used to analyse the overall indicator of drug awareness were as follows: (a) unconditional linear growth curve model; (b) growth model with subject-associated variables; and (c) individual curve model with predictive variables. The results showed that awareness increased over time and that the variable 'schooling years' explained part of the between-subjects variation. The effect of motivational interviewing was also significant.
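    As a rough illustration of the growth-curve perspective behind model (a), the sketch below fits a separate straight line of awareness against time for each simulated subject and summarizes the slopes. A real linear mixed model would partially pool these subject-specific estimates and test predictors such as schooling years; all data and parameter values here are invented:

    ```python
    # Hypothetical growth-curve sketch: per-subject OLS lines for a longitudinal
    # awareness outcome, then a summary of the individual growth slopes.
    import random

    def ols_line(t, y):
        """Return (intercept, slope) of the least-squares line through (t, y)."""
        n = len(t)
        tm, ym = sum(t) / n, sum(y) / n
        slope = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y)) / \
                sum((ti - tm) ** 2 for ti in t)
        return ym - slope * tm, slope

    random.seed(1)
    times = [0, 1, 2, 3]  # assessment waves
    subjects = []
    for _ in range(40):
        b0 = random.gauss(50, 5)     # subject-specific baseline awareness
        b1 = random.gauss(2.0, 0.5)  # subject-specific growth per wave
        subjects.append([b0 + b1 * t + random.gauss(0, 1) for t in times])

    slopes = [ols_line(times, y)[1] for y in subjects]
    mean_slope = sum(slopes) / len(slopes)
    print(round(mean_slope, 2))  # positive mean slope: awareness grows over time
    ```

    The between-subject spread of `slopes` is what the random-effects part of a mixed model would capture; a subject-level covariate enters by regressing these slopes (or, better, jointly in the mixed model) on it.
    
    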

  11. Ultra compact spectrometer using linear variable filters

    NASA Astrophysics Data System (ADS)

    Dami, M.; De Vidi, R.; Aroldi, G.; Belli, F.; Chicarella, L.; Piegari, A.; Sytchkova, A.; Bulir, J.; Lemarquis, F.; Lequime, M.; Abel Tibérini, L.; Harnisch, B.

    2017-11-01

    Linearly Variable Filters (LVFs) are complex optical devices that, integrated with a CCD, can realize a "single-chip spectrometer". In the framework of an ESA study, a team of industries and institutes led by SELEX-Galileo explored the design principles and manufacturing techniques, realizing and characterizing LVF samples based on both All-Dielectric (AD) and Metal-Dielectric (MD) coating structures in the VNIR and SWIR spectral ranges. In particular, the performance achieved in spectral gradient, transmission bandwidth and Spectral Attenuation (SA) is presented and critically discussed, and potential improvements are highlighted. In addition, the results of a feasibility study of a SWIR Linear Variable Filter are presented, with a comparison of design predictions and measured performance. Finally, criticalities related to the filter-CCD packaging are discussed. The main achievements of these activities have been: to evaluate, by design, manufacture and test of LVF samples, the achievable performance compared with target requirements; to evaluate the reliability of the projects by analyzing their repeatability; and to define suitable measurement methodologies.

  12. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    PubMed Central

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636

  13. The Dropout Learning Algorithm

    PubMed Central

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful for understanding the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions and the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
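    The approximation at the heart of equations (1)-(2), replacing the ensemble expectation of a logistic unit under Bernoulli dropout by the logistic of the mean input, can be checked numerically. The weights, inputs, and keep probability below are arbitrary choices for illustration, not values from the paper:

    ```python
    # Monte Carlo check of the dropout ensemble approximation:
    # E[sigma(sum_i d_i w_i x_i)], d_i ~ Bernoulli(p), is close to
    # sigma(p * sum_i w_i x_i), the logistic of the mean input.
    import math
    import random

    def sigma(z):
        return 1.0 / (1.0 + math.exp(-z))

    random.seed(0)
    w = [0.4, -0.3, 0.8, 0.1]   # arbitrary weights
    x = [1.0, 2.0, -1.0, 0.5]   # arbitrary inputs
    p = 0.5                      # probability of keeping a unit

    # Monte Carlo average over random dropout masks
    trials = 20000
    mc = sum(
        sigma(sum(wi * xi * (random.random() < p) for wi, xi in zip(w, x)))
        for _ in range(trials)
    ) / trials

    approx = sigma(p * sum(wi * xi for wi, xi in zip(w, x)))
    print(round(mc, 3), round(approx, 3))  # the two values should be close
    ```

    The small gap between the two numbers is the approximation error the paper bounds; it tends to shrink as the number of inputs grows.
    
    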

  14. Small area estimation for semicontinuous data.

    PubMed

    Chandra, Hukum; Chambers, Ray

    2016-03-01

    Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
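    The two-part logic, modelling the probability of a nonzero value separately from the log-scale size of the nonzero values, can be illustrated on invented data. This sketch deliberately omits the mixed-model and small-area machinery and just shows the decomposition E[y] = P(y > 0) · E[y | y > 0]:

    ```python
    # Two-part sketch for a semicontinuous variable: a point mass at zero plus a
    # skewed (lognormal) positive part fitted on the log scale.
    import math
    import random

    random.seed(2)
    n = 5000
    p_zero = 0.3  # assumed probability of an exact zero
    ys = [0.0 if random.random() < p_zero else math.exp(random.gauss(1.0, 0.5))
          for _ in range(n)]

    # Part 1: probability of a strictly positive value.
    positives = [y for y in ys if y > 0]
    p_pos_hat = len(positives) / n

    # Part 2: model the positive part on the log scale, then back-transform
    # using the lognormal mean exp(mu + s^2 / 2).
    logs = [math.log(y) for y in positives]
    mu = sum(logs) / len(logs)
    s2 = sum((l - mu) ** 2 for l in logs) / (len(logs) - 1)
    mean_pos_hat = math.exp(mu + s2 / 2)

    two_part_mean = p_pos_hat * mean_pos_hat
    direct_mean = sum(ys) / n
    print(round(two_part_mean, 3), round(direct_mean, 3))  # should agree closely
    ```

    In the paper, part 1 becomes a generalized linear mixed model for the nonzero probability and part 2 a linear mixed model on the log scale, with area-level random effects in both.
    
    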

  15. A Bayesian Semiparametric Latent Variable Model for Mixed Responses

    ERIC Educational Resources Information Center

    Fahrmeir, Ludwig; Raach, Alexander

    2007-01-01

    In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…

  16. Two Computer Programs for the Statistical Evaluation of a Weighted Linear Composite.

    ERIC Educational Resources Information Center

    Sands, William A.

    1978-01-01

    Two computer programs (one batch, one interactive) are designed to provide statistics for a weighted linear combination of several component variables. Both programs provide mean, variance, standard deviation, and a validity coefficient. (Author/JKS)
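    A minimal sketch of the statistics such a program reports, the mean, variance, standard deviation, and validity coefficient of a weighted linear composite, might look like the following; the data, weights, and criterion are invented:

    ```python
    # Statistics for the composite c = sum_j w_j * x_j: mean, variance, SD, and
    # the validity coefficient (correlation of c with a criterion variable).
    import math
    import random

    def composite_stats(rows, weights, criterion):
        comp = [sum(w * x for w, x in zip(weights, row)) for row in rows]
        n = len(comp)
        mean = sum(comp) / n
        var = sum((c - mean) ** 2 for c in comp) / (n - 1)
        cm = sum(criterion) / n
        cov = sum((c - mean) * (y - cm) for c, y in zip(comp, criterion)) / (n - 1)
        cvar = sum((y - cm) ** 2 for y in criterion) / (n - 1)
        validity = cov / math.sqrt(var * cvar)
        return mean, var, math.sqrt(var), validity

    random.seed(3)
    rows = [[random.gauss(0, 1) for _ in range(3)] for _ in range(200)]
    weights = [0.5, 0.3, 0.2]
    criterion = [sum(r) + random.gauss(0, 1) for r in rows]
    mean, var, sd, validity = composite_stats(rows, weights, criterion)
    print(round(sd, 2), round(validity, 2))  # validity clearly positive here
    ```

    With uncorrelated components, the composite variance reduces to the weighted sum of component variances (w'Sw with a diagonal S), which is what the printed SD reflects.
    
    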

  17. Application of variable-gain output feedback for high-alpha control

    NASA Technical Reports Server (NTRS)

    Ostroff, Aaron J.

    1990-01-01

    A variable-gain, optimal, discrete, output feedback design approach that is applied to a nonlinear flight regime is described. The flight regime covers a wide angle-of-attack range that includes stall and post-stall. The paper includes brief descriptions of the variable-gain formulation, the discrete-control structure and flight equations used to apply the design approach, and the high-performance airplane model used in the application. Both linear and nonlinear analyses are shown for a longitudinal four-model design case with angles of attack of 5, 15, 35, and 60 deg. Linear and nonlinear simulations are compared for a single-point longitudinal design at 60 deg angle of attack. Nonlinear simulations for the four-model, multi-mode, variable-gain design include a longitudinal pitch-up and pitch-down maneuver and high angle-of-attack regulation during a lateral maneuver.

  18. Hybrid Genetic Algorithms and Line Search Method for Industrial Production Planning with Non-Linear Fitness Function

    NASA Astrophysics Data System (ADS)

    Vasant, Pandian; Barsoum, Nader

    2008-10-01

    Many engineering, science, information technology and management optimization problems can be considered as non-linear programming real-world problems in which all or some of the parameters and variables involved are uncertain in nature. These can only be quantified using intelligent computational techniques such as evolutionary computation and fuzzy logic. The main objective of this paper is to solve a non-linear fuzzy optimization problem in which the technological coefficients in the constraints are fuzzy numbers, represented by logistic membership functions, using a hybrid evolutionary optimization approach. To explore the applicability of the present study, a numerical example is considered to determine the production planning for the decision variables and the profit of the company.

  19. Starspots and active regions on IN Com: UBVRI photometry and linear polarization

    NASA Astrophysics Data System (ADS)

    Alekseev, I. Yu.; Kozlova, O. V.

    2014-06-01

    The activity of the variable star IN Com is considered using the latest multicolor UBVRI photometry and linear polarimetric observations carried out during a decade. The photometric variability of the star is fully described using the zonal spottedness model developed at the Crimean Astrophysical Observatory (CrAO). Spotted regions cover up to 22% of the total stellar surface, with the difference in temperatures between the quiet photosphere and the spot umbra being 600 K. The spots are located at middle and low latitudes (40°-55°). The intrinsic broad-band linear polarization of IN Com and its rotational modulation in the U band due to local magnetic fields at the most spotted (active) stellar longitudes were detected for the first time.

  20. Regularity gradient estimates for weak solutions of singular quasi-linear parabolic equations

    NASA Astrophysics Data System (ADS)

    Phan, Tuoc

    2017-12-01

    This paper studies the Sobolev regularity of weak solutions for a class of singular quasi-linear parabolic problems of the form u_t − div[A(x, t, u, ∇u)] = div[F] with homogeneous Dirichlet boundary conditions over bounded spatial domains. Our main focus is on the case that the vector coefficients A are discontinuous and singular in the (x, t)-variables, and dependent on the solution u. Global and interior weighted W^{1,p}(Ω_T, ω)-regularity estimates are established for weak solutions of these equations, where ω is a weight function in some Muckenhoupt class of weights. The results obtained are new even for linear equations, and for ω = 1, because of the singularity of the coefficients in the (x, t)-variables.

  1. Single-step fabrication of thin-film linear variable bandpass filters based on metal-insulator-metal geometry.

    PubMed

    Williams, Calum; Rughoobur, Girish; Flewitt, Andrew J; Wilkinson, Timothy D

    2016-11-10

    A single-step fabrication method is presented for ultra-thin, linearly variable optical bandpass filters (LVBFs) based on a metal-insulator-metal arrangement using modified evaporation deposition techniques. This alternate process methodology offers reduced complexity and cost in comparison to conventional techniques for fabricating LVBFs. We are able to achieve linear variation of insulator thickness across a sample, by adjusting the geometrical parameters of a typical physical vapor deposition process. We demonstrate LVBFs with spectral selectivity from 400 to 850 nm based on Ag (25 nm) and MgF2 (75-250 nm). Maximum spectral transmittance is measured at ∼70% with a Q-factor of ∼20.
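    The reason a linear thickness wedge yields a linearly variable filter can be seen from the first-order Fabry-Perot resonance condition: ignoring the metal phase shifts the authors must account for in practice, the passband wavelength scales as λ ≈ 2nd/m with insulator thickness d. A back-of-envelope sketch, with an approximate MgF2 index assumed:

    ```python
    # First-order cavity resonance: lambda ≈ 2 * n * d / m, metal phase shifts
    # ignored, so this is an order-of-magnitude sketch rather than a design tool.
    N_MGF2 = 1.38  # approximate refractive index of MgF2 in the visible

    def peak_wavelength_nm(d_nm, order=1, n=N_MGF2):
        """Approximate passband wavelength for insulator thickness d_nm."""
        return 2.0 * n * d_nm / order

    # A linear thickness ramp maps to a roughly linear wavelength ramp:
    for d in (150, 200, 250):
        print(d, "nm ->", round(peak_wavelength_nm(d)), "nm")
    ```

    In the real device the metal mirrors contribute thickness-independent phase terms, which shift and slightly bend this mapping; the linearity of the thickness gradient is what the modified evaporation geometry controls.
    
    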

  2. Linear phase compressive filter

    DOEpatents

    McEwan, Thomas E.

    1995-01-01

    A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.

  3. TiO2 dye sensitized solar cell (DSSC): linear relationship of maximum power point and anthocyanin concentration

    NASA Astrophysics Data System (ADS)

    Ahmadian, Radin

    2010-09-01

    This study investigated the relationship between anthocyanin concentration from different organic fruit species and the output voltage and current of a TiO2 dye-sensitized solar cell (DSSC), and hypothesized that fruits with greater anthocyanin concentration produce a higher maximum power point (MPP), which would lead to higher current and voltage. Anthocyanin dye solution was made by crushing a group of fresh fruits with different anthocyanin content in 2 mL of de-ionized water, followed by filtration. Using these test fruit dyes, multiple DSSCs were assembled such that light enters through the TiO2 side of the cell. The full current-voltage (I-V) co-variations were measured using a 500 Ω potentiometer as a variable load. Point-by-point current and voltage data pairs were measured at various incremental resistance values. The maximum power point (MPP) generated by the solar cell was defined as the dependent variable and the anthocyanin concentration in the fruit used in the DSSC as the independent variable. A regression model was used to investigate the linear relationship between the study variables. Regression analysis showed a significant linear relationship between MPP and anthocyanin concentration, with a p-value of 0.007. Fruits like blueberry and black raspberry, with the highest anthocyanin content, generated higher MPP. In a DSSC, a linear model may predict MPP based on the anthocyanin concentration. This model is the first step toward finding organic anthocyanin sources in nature with the highest dye concentration to generate energy.
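    The analysis pipeline described, locating the maximum power point from point-by-point (I, V) pairs and then regressing MPP on anthocyanin concentration, can be sketched as below. All numbers are invented placeholders, not the study's measurements:

    ```python
    # Sketch of the MPP-vs-anthocyanin analysis on made-up I-V sweeps.
    def max_power_point(iv_pairs):
        """iv_pairs: list of (current_mA, voltage_mV) pairs; return max of I*V."""
        return max(i * v for i, v in iv_pairs)

    def ols_slope(x, y):
        """Slope of the least-squares line of y on x."""
        n = len(x)
        xm, ym = sum(x) / n, sum(y) / n
        return sum((a - xm) * (b - ym) for a, b in zip(x, y)) / \
               sum((a - xm) ** 2 for a in x)

    # Hypothetical cells: anthocyanin concentration -> an I-V sweep over the
    # variable-load resistance (three operating points each, for brevity).
    cells = {
        1.0:  [(0.10, 80),  (0.08, 150), (0.05, 200)],
        5.0:  [(0.30, 120), (0.25, 220), (0.15, 300)],
        10.0: [(0.55, 150), (0.45, 260), (0.25, 340)],
    }
    conc = sorted(cells)
    mpp = [max_power_point(cells[c]) for c in conc]
    print(ols_slope(conc, mpp) > 0)  # expected True: higher anthocyanin, higher MPP
    ```

    A real analysis would sweep many more load values per cell and report the slope's significance, which is where the study's p-value of 0.007 comes from.
    
    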

  4. An Entropy-Based Measure of Dependence between Two Groups of Random Variables. Research Report. ETS RR-07-20

    ERIC Educational Resources Information Center

    Kong, Nan

    2007-01-01

    In multivariate statistics, the linear relationship among random variables has been fully explored in the past. This paper looks into the dependence of one group of random variables on another group of random variables using (conditional) entropy. A new measure, called the K-dependence coefficient or dependence coefficient, is defined using…

  5. Multiple Use One-Sided Hypotheses Testing in Univariate Linear Calibration

    NASA Technical Reports Server (NTRS)

    Krishnamoorthy, K.; Kulkarni, Pandurang M.; Mathew, Thomas

    1996-01-01

    Consider a normally distributed response variable, related to an explanatory variable through the simple linear regression model. Data obtained on the response variable, corresponding to known values of the explanatory variable (i.e., calibration data), are to be used for testing hypotheses concerning unknown values of the explanatory variable. We consider the problem of testing an unlimited sequence of one-sided hypotheses concerning the explanatory variable, using the corresponding sequence of values of the response variable and the same set of calibration data. This is the situation of multiple use of the calibration data. The tests derived in this context are characterized by two types of uncertainties: one uncertainty associated with the sequence of values of the response variable, and a second uncertainty associated with the calibration data. We derive tests based on a condition that incorporates both of these uncertainties. The solution has practical applications in the decision limit problem. We illustrate our results using an example dealing with the estimation of blood alcohol concentration based on breath estimates of the alcohol concentration. In the example, the problem is to test whether the unknown blood alcohol concentration of an individual exceeds a threshold that is safe for driving.

  6. A non-linear data mining parameter selection algorithm for continuous variables

    PubMed Central

    Razavi, Marianne; Brady, Sean

    2017-01-01

    In this article, we propose a new data mining algorithm by which one can both capture the non-linearity in data and find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all-possible-subset selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829

  7. Correlation and simple linear regression.

    PubMed

    Eberly, Lynn E

    2007-01-01

    This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
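    The chapter's two core quantities for paired continuous data, the Pearson correlation and the fitted simple regression line, reduce to a few sums. A self-contained sketch with invented microbiology-flavoured data:

    ```python
    # Pearson correlation and simple linear regression from first principles.
    import math

    def pearson_r(x, y):
        n = len(x)
        xm, ym = sum(x) / n, sum(y) / n
        sxy = sum((a - xm) * (b - ym) for a, b in zip(x, y))
        sxx = sum((a - xm) ** 2 for a in x)
        syy = sum((b - ym) ** 2 for b in y)
        return sxy / math.sqrt(sxx * syy)

    def simple_regression(x, y):
        """Return (intercept, slope) of the least-squares line of y on x."""
        n = len(x)
        xm, ym = sum(x) / n, sum(y) / n
        slope = sum((a - xm) * (b - ym) for a, b in zip(x, y)) / \
                sum((a - xm) ** 2 for a in x)
        return ym - slope * xm, slope

    # e.g. optical density of a culture vs. incubation time (invented values)
    x = [1, 2, 3, 4, 5]
    y = [2.1, 3.9, 6.2, 7.8, 10.1]
    print(round(pearson_r(x, y), 3))  # close to 1: strong linear association
    print(simple_regression(x, y))    # intercept near 0, slope near 2
    ```

    Inference on the slope (the estimation-and-inference step the chapter covers) would add a standard error and a t-test on top of these point estimates.
    
    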

  8. Modeling Effects of Temperature, Soil, Moisture, Nutrition and Variety As Determinants of Severity of Pythium Damping-Off and Root Disease in Subterranean Clover

    PubMed Central

    You, Ming P.; Rensing, Kelly; Renton, Michael; Barbetti, Martin J.

    2017-01-01

    Subterranean clover (Trifolium subterraneum) is a critical pasture legume in Mediterranean regions of southern Australia and elsewhere, including Mediterranean-type climatic regions in Africa, Asia, Australia, Europe, North America, and South America. Pythium damping-off and root disease caused by Pythium irregulare is a significant threat to subterranean clover in Australia, and a study was conducted to define how environmental factors (viz. temperature, soil type, moisture and nutrition) as well as variety influence the extent of damping-off and root disease as well as subterranean clover productivity under challenge by this pathogen. Relationships were statistically modeled using linear and generalized linear models and boosted regression trees. Modeling found complex relationships between explanatory variables and the extent of Pythium damping-off and root rot. Linear modeling identified high-level (4- or 5-way) significant interactions for each dependent variable (dry shoot and root weight, emergence, tap and lateral root disease index). Furthermore, all explanatory variables (temperature, soil, moisture, nutrition, variety) were found significant as part of some interaction within these models. A significant five-way interaction between all explanatory variables was found for both dry shoot and root dry weights, and a four-way interaction between temperature, soil, moisture, and nutrition was found for both tap and lateral root disease index. A second approach to modeling using boosted regression trees provided support for, and helped clarify, the complex nature of the relationships found in linear models. All explanatory variables showed at least 5% relative influence on each of the five dependent variables. All models indicated differences due to soil type, with the sand-based soil having either higher weights, greater emergence, or lower disease indices; while the lowest weights and least emergence, as well as higher disease indices, were found for loam soil and low temperature. There was more severe tap and lateral root rot disease in higher moisture situations. PMID:29184544

  9. The Routine Fitting of Kinetic Data to Models

    PubMed Central

    Berman, Mones; Shahn, Ezra; Weiss, Marjory F.

    1962-01-01

    A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975
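    The iterative least-squares adjustment the abstract describes can be illustrated with a Gauss-Newton loop fitting a one-compartment decay model y = A·exp(−kt). This is a modern toy sketch of the same idea, not the original 1962 program:

    ```python
    # Gauss-Newton fitting of y = A * exp(-k * t): starting from initial
    # parameter estimates, iteratively solve the linearized normal equations.
    import math

    def fit_decay(ts, ys, A, k, iters=50):
        for _ in range(iters):
            # residuals and Jacobian of the model at the current (A, k)
            r = [y - A * math.exp(-k * t) for t, y in zip(ts, ys)]
            JA = [math.exp(-k * t) for t in ts]
            Jk = [-A * t * math.exp(-k * t) for t in ts]
            # solve the 2x2 normal equations (J'J) delta = J'r
            a11 = sum(j * j for j in JA)
            a12 = sum(a * b for a, b in zip(JA, Jk))
            a22 = sum(j * j for j in Jk)
            b1 = sum(j * ri for j, ri in zip(JA, r))
            b2 = sum(j * ri for j, ri in zip(Jk, r))
            det = a11 * a22 - a12 * a12
            dA = (b1 * a22 - b2 * a12) / det
            dk = (a11 * b2 - a12 * b1) / det
            A, k = A + dA, k + dk
        return A, k

    ts = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0]
    ys = [2.0 * math.exp(-0.7 * t) for t in ts]  # noiseless synthetic data
    A, k = fit_decay(ts, ys, A=1.0, k=0.3)       # deliberately poor start
    print(round(A, 3), round(k, 3))  # should recover A = 2.0, k = 0.7
    ```

    The paper's formalism generalizes this loop to systems of differential equations, linear coefficient estimation, and constrained (dependent) parameters.
    
    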

  10. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    PubMed

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank, up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  11. User's manual for interactive LINEAR: A FORTRAN program to derive linear aircraft models

    NASA Technical Reports Server (NTRS)

    Antoniewicz, Robert F.; Duke, Eugene L.; Patterson, Brian P.

    1988-01-01

    An interactive FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models is documented in this report. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
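    The numerical linearization such a program performs, extracting the state and observation matrices from nonlinear equations of motion about an operating point, can be sketched with central finite differences. The toy dynamics below are an assumption for illustration, not LINEAR's six-degree-of-freedom equations:

    ```python
    # Numerical linearization xdot = f(x, u) -> xdot ≈ A*dx + B*du at a trim
    # point, using central-difference Jacobians.
    import math

    def jacobian(f, v0, eps=1e-6):
        """Central-difference Jacobian of vector function f at point v0."""
        m = len(f(v0))
        J = [[0.0] * len(v0) for _ in range(m)]
        for j in range(len(v0)):
            vp, vm = list(v0), list(v0)
            vp[j] += eps
            vm[j] -= eps
            fp, fm = f(vp), f(vm)
            for i in range(m):
                J[i][j] = (fp[i] - fm[i]) / (2 * eps)
        return J

    # Toy pendulum-like dynamics: state x = [theta, q], control u = [delta]
    def xdot(x, u):
        theta, q = x
        return [q, -4.0 * math.sin(theta) - 0.5 * q + 2.0 * u[0]]

    x_trim, u_trim = [0.0, 0.0], [0.0]
    A = jacobian(lambda x: xdot(x, u_trim), x_trim)  # state matrix
    B = jacobian(lambda u: xdot(x_trim, u), u_trim)  # control matrix
    print(A)  # approximately [[0, 1], [-4, -0.5]]
    print(B)  # approximately [[0], [2]]
    ```

    The observation matrices of the abstract follow the same recipe applied to a user-selected output function instead of the state derivative.
    
    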

  12. Application of variable teeth pitch face mill as chatter suppression method for non-rigid technological system

    NASA Astrophysics Data System (ADS)

    Svinin, V. M.; Savilov, A. V.

    2018-03-01

    The article describes the results of experimental studies on the effect of the variation pattern of variable teeth pitches on chatter suppression efficiency for low-rigidity workpieces, in the feed direction and in the direction normal to the machined surface. Mill performance was identified by comparing the amplitudes of dominant chatter harmonics using constant and variable teeth pitches. The following variable-pitch formation variants were studied: alternating, linearly rising, and linearly rising-falling. The angle difference of adjacent teeth pitches ranged from 0 to 10°, from 5 to 8°, and from 5 to 10°, at intervals of 1°. The experiments showed that, for all variants, machining dynamics performance resulted from a difference of adjacent pitches corresponding to half the chatter wavelength along the cutting surface. The alternating type of variable teeth pitch is most efficient, as it almost completely suppresses the chatter. Theoretical explanations of the results are presented.

  13. Relationship of red and photographic infrared spectral radiances to alfalfa biomass, forage water content, percentage canopy cover, and severity of drought stress

    NASA Technical Reports Server (NTRS)

    Tucker, C. J.; Elgin, J. H., Jr.; Mcmurtrey, J. E., III

    1979-01-01

Red and photographic infrared spectral data were collected with a handheld radiometer for two cuttings of alfalfa. Significant linear and non-linear correlation coefficients were found between the spectral variables and plant height, biomass, forage water content, and estimated canopy cover for the earlier alfalfa cutting. The alfalfa of the later cutting experienced a period of severe drought stress that limited growth. The spectral variables were found to be highly correlated with the estimated drought scores for this cutting.

  14. The Stability and Interfacial Motion of Multi-layer Radial Porous Media and Hele-Shaw Flows

    NASA Astrophysics Data System (ADS)

    Gin, Craig; Daripa, Prabir

    2017-11-01

    In this talk, we will discuss viscous fingering instabilities of multi-layer immiscible porous media flows within the Hele-Shaw model in a radial flow geometry. We study the motion of the interfaces for flows with both constant and variable viscosity fluids. We consider the effects of using a variable injection rate on multi-layer flows. We also present a numerical approach to simulating the interface motion within linear theory using the method of eigenfunction expansion. We compare these results with fully non-linear simulations.

  15. The measurement of the earth's radiation budget as a problem in information theory - A tool for the rational design of earth observing systems

    NASA Technical Reports Server (NTRS)

    Barkstrom, B. R.

    1983-01-01

    The measurement of the earth's radiation budget has been chosen to illustrate the technique of objective system design. The measurement process is an approximately linear transformation of the original field of radiant exitances, so that linear statistical techniques may be employed. The combination of variability, measurement strategy, and error propagation is presently made with the help of information theory, as suggested by Kondratyev et al. (1975) and Peckham (1974). Covariance matrices furnish the quantitative statement of field variability.

  16. Linear Least Squares for Correlated Data

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1988-01-01

    Throughout the literature authors have consistently discussed the suspicion that regression results were less than satisfactory when the independent variables were correlated. Camm, Gulledge, and Womer, and Womer and Marcotte provide excellent applied examples of these concerns. Many authors have obtained partial solutions for this problem as discussed by Womer and Marcotte and Wonnacott and Wonnacott, which result in generalized least squares algorithms to solve restrictive cases. This paper presents a simple but relatively general multivariate method for obtaining linear least squares coefficients which are free of the statistical distortion created by correlated independent variables.
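The distortion the paper addresses is easy to reproduce. The following numpy sketch illustrates the problem and a standard SVD-based remedy; it is not the paper's method. With two nearly collinear predictors, the normal equations become badly conditioned and individual coefficients are unstable, yet well-determined combinations of them are still recovered:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.001 * rng.normal(size=n)      # nearly collinear with x1
X = np.column_stack([np.ones(n), x1, x2])
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(scale=0.1, size=n)

# the normal equations X'X b = X'y are badly conditioned here
cond = np.linalg.cond(X.T @ X)

# an SVD-based pseudoinverse still yields the minimum-norm least-squares fit;
# the individual slopes stay unstable, but their sum (the well-determined
# direction) is recovered accurately
beta = np.linalg.pinv(X) @ y
```

Here `beta[1] + beta[2]` stays close to the true 2 + 3 = 5 even though each slope on its own may be far off.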

  17. Design of Linear Quadratic Regulators and Kalman Filters

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Geyser, L.

    1986-01-01

    AESOP solves problems associated with design of controls and state estimators for linear time-invariant systems. Systems considered are modeled in state-variable form by set of linear differential and algebraic equations with constant coefficients. Two key problems solved by AESOP are linear quadratic regulator (LQR) design problem and steady-state Kalman filter design problem. AESOP is interactive. User solves design problems and analyzes solutions in single interactive session. Both numerical and graphical information available to user during the session.
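AESOP itself is an interactive FORTRAN program, but the LQR half of the problem can be sketched compactly. Below is a minimal Python illustration for the discrete-time analogue, solving the steady-state Riccati equation by fixed-point iteration; this is an independent sketch, not AESOP's algorithm:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR: iterate the Riccati recursion to its steady
    state and return the gain K for the control law u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# double-integrator example
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
K = dlqr(A, B, np.eye(2), np.array([[1.0]]))
```

The resulting closed-loop matrix A - BK has all eigenvalues inside the unit circle, i.e., the regulator stabilizes the plant.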

  18. Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Yousaf; Mittnik, Stefan

    2018-01-01

In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended the previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies, which typically specify threshold models with an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models with external threshold variables produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device to model and forecast the raw seismic data of the Hindu Kush region.
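As a minimal illustration of the benchmark used in such comparisons, an AR(p) model can be fit by ordinary least squares and used for one-step point forecasts. This is a generic sketch on synthetic data, not the study's code or its seismic dataset:

```python
import numpy as np

def fit_ar(x, p):
    """Fit an AR(p) model x[t] = c + a_1 x[t-1] + ... + a_p x[t-p] + e[t]
    by ordinary least squares. Returns [c, a_1, ..., a_p]."""
    rows = len(x) - p
    X = np.column_stack([np.ones(rows)] +
                        [x[p - i:len(x) - i] for i in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

def forecast_one(coef, history):
    """One-step-ahead point forecast from the most recent observations."""
    p = len(coef) - 1
    lags = history[-1:-p - 1:-1]              # x[t-1], x[t-2], ..., x[t-p]
    return coef[0] + coef[1:] @ lags

# synthetic AR(1) series with true coefficient 0.8
rng = np.random.default_rng(1)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * x[t - 1] + rng.normal()
coef = fit_ar(x, 1)
pred = forecast_one(coef, x)
```

With 500 observations the OLS estimate of the lag-1 coefficient lands close to the true 0.8.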

  19. Variables Affecting Proficiency in English as a Second Language

    ERIC Educational Resources Information Center

    Santana, Josefina C.; García-Santillán, Arturo; Escalera-Chávez, Milka Elena

    2017-01-01

    This study explores different variables leading to proficiency in English as a second language. Level of English on a placement exam taken upon entering a private university in Mexico was correlated to several variables. Additionally, participants (N = 218) were asked their perception of their own proficiency. A linear regression and a one-factor…

  20. Regression Methods for Categorical Dependent Variables: Effects on a Model of Student College Choice

    ERIC Educational Resources Information Center

    Rapp, Kelly E.

    2012-01-01

    The use of categorical dependent variables with the classical linear regression model (CLRM) violates many of the model's assumptions and may result in biased estimates (Long, 1997; O'Connell, Goldstein, Rogers, & Peng, 2008). Many dependent variables of interest to educational researchers (e.g., professorial rank, educational attainment) are…

  1. Least Principal Components Analysis (LPCA): An Alternative to Regression Analysis.

    ERIC Educational Resources Information Center

    Olson, Jeffery E.

    Often, all of the variables in a model are latent, random, or subject to measurement error, or there is not an obvious dependent variable. When any of these conditions exist, an appropriate method for estimating the linear relationships among the variables is Least Principal Components Analysis. Least Principal Components are robust, consistent,…

  2. Daily commuting to work is not associated with variables of health.

    PubMed

    Mauss, Daniel; Jarczok, Marc N; Fischer, Joachim E

    2016-01-01

    Commuting to work is thought to have a negative impact on employee health. We tested the association of work commute and different variables of health in German industrial employees. Self-rated variables of an industrial cohort (n = 3805; 78.9 % male) including absenteeism, presenteeism and indices reflecting stress and well-being were assessed by a questionnaire. Fasting blood samples, heart-rate variability and anthropometric data were collected. Commuting was grouped into one of four categories: 0-19.9, 20-44.9, 45-59.9, ≥60 min travelling one way to work. Bivariate associations between commuting and all variables under study were calculated. Linear regression models tested this association further, controlling for potential confounders. Commuting was positively correlated with waist circumference and inversely with triglycerides. These associations did not remain statistically significant in linear regression models controlling for age, gender, marital status, and shiftwork. No other association with variables of physical, psychological, or mental health and well-being could be found. The results indicate that commuting to work has no significant impact on well-being and health of German industrial employees.

  3. Development and validation of a general purpose linearization program for rigid aircraft models

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Antoniewicz, R. F.

    1985-01-01

    A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also, included in the report is a comparison of linear and nonlinear models for a high performance aircraft.

  4. A comparison of two multi-variable integrator windup protection schemes

    NASA Technical Reports Server (NTRS)

    Mattern, Duane

    1993-01-01

    Two methods are examined for limit and integrator wind-up protection for multi-input, multi-output linear controllers subject to actuator constraints. The methods begin with an existing linear controller that satisfies the specifications for the nominal, small perturbation, linear model of the plant. The controllers are formulated to include an additional contribution to the state derivative calculations. The first method to be examined is the multi-variable version of the single-input, single-output, high gain, Conventional Anti-Windup (CAW) scheme. Except for the actuator limits, the CAW scheme is linear. The second scheme to be examined, denoted the Modified Anti-Windup (MAW) scheme, uses a scalar to modify the magnitude of the controller output vector while maintaining the vector direction. The calculation of the scalar modifier is a nonlinear function of the controller outputs and the actuator limits. In both cases the constrained actuator is tracked. These two integrator windup protection methods are demonstrated on a turbofan engine control system with five measurements, four control variables, and four actuators. The closed-loop responses of the two schemes are compared and contrasted during limit operation. The issue of maintaining the direction of the controller output vector using the Modified Anti-Windup scheme is discussed and the advantages and disadvantages of both of the IWP methods are presented.
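The scalar modification at the heart of the MAW scheme can be illustrated with a minimal sketch. This is an assumption-laden simplification: the actuator limits are taken to straddle zero, and the constrained-actuator tracking logic of the full scheme is omitted:

```python
import numpy as np

def scale_to_limits(u, u_min, u_max):
    """Shrink the whole control vector by a single scalar so that every
    component respects its actuator limit, preserving the vector's
    direction (the idea behind the Modified Anti-Windup scheme).
    Assumes each interval [u_min, u_max] contains zero."""
    alpha = 1.0
    for ui, lo, hi in zip(u, u_min, u_max):
        if ui > hi:
            alpha = min(alpha, hi / ui)
        elif ui < lo:
            alpha = min(alpha, lo / ui)
    return alpha * np.asarray(u)

# a request of [2, 1] against symmetric limits of +/-1 is scaled, not clipped:
u_clipped = scale_to_limits(np.array([2.0, 1.0]),
                            np.array([-1.0, -1.0]),
                            np.array([1.0, 1.0]))
```

Unlike per-component clipping (which would return [1, 1] and change direction), the scaled result [1.0, 0.5] keeps the original direction.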

  5. Reducing bias and analyzing variability in the time-left procedure.

    PubMed

    Trujano, R Emmanuel; Orduña, Vladimir

    2015-04-01

    The time-left procedure was designed to evaluate the psychophysical function for time. Although previous results indicated a linear relationship, it is not clear what role the observed bias toward the time-left option plays in this procedure and there are no reports of how variability changes with predicted indifference. The purposes of this experiment were to reduce bias experimentally, and to contrast the difference limen (a measure of variability around indifference) with predictions from scalar expectancy theory (linear timing) and behavioral economic model (logarithmic timing). A control group of 6 rats performed the original time-left procedure with C=60 s and S=5, 10,…, 50, 55 s, whereas a no-bias group of 6 rats performed the same conditions in a modified time-left procedure in which only a single response per choice trial was allowed. Results showed that bias was reduced for the no-bias group, observed indifference grew linearly with predicted indifference for both groups, and difference limen and Weber ratios decreased as expected indifference increased for the control group, which is consistent with linear timing, whereas for the no-bias group they remained constant, consistent with logarithmic timing. Therefore, the time-left procedure generates results consistent with logarithmic perceived time once bias is experimentally reduced. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Approximate reduction of linear population models governed by stochastic differential equations: application to multiregional models.

    PubMed

    Sanz, Luis; Alonso, Juan Antonio

    2017-12-01

In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to transform a complex system, involving many coupled variables and processes with different time scales, into a simpler reduced model with a smaller number of 'global' variables, in such a way that the dynamics of the former can be approximated by that of the latter. In our model we contemplate a linear fast deterministic process together with a linear slow process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with a smaller number of variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced system. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the growth rate of the population in each patch is affected by additive noise.

  7. Finite Element Based Structural Damage Detection Using Artificial Boundary Conditions

    DTIC Science & Technology

    2007-09-01

C. (2005). Elementary Linear Algebra. New York: John Wiley and Sons. Avitable, Peter (2001, January). Experimental Modal Analysis, A Simple Non... variables under consideration. Frequency sensitivities are the basis for a linear approximation to compute the change in the natural frequencies of a... THEORY: The general problem statement for a nonlinear constrained optimization problem is: minimize f(x) (the objective function) subject to

  8. Comparing Multiple-Group Multinomial Log-Linear Models for Multidimensional Skill Distributions in the General Diagnostic Model. Research Report. ETS RR-08-35

    ERIC Educational Resources Information Center

    Xu, Xueli; von Davier, Matthias

    2008-01-01

    The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…

  9. Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface

    NASA Technical Reports Server (NTRS)

    Brown, Cliff

    2015-01-01

Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.

  10. Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.

    2016-01-01

Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.

  11. A SEMIPARAMETRIC BAYESIAN MODEL FOR CIRCULAR-LINEAR REGRESSION

    EPA Science Inventory

    We present a Bayesian approach to regress a circular variable on a linear predictor. The regression coefficients are assumed to have a nonparametric distribution with a Dirichlet process prior. The semiparametric Bayesian approach gives added flexibility to the model and is usefu...

  12. Multiple regression for physiological data analysis: the problem of multicollinearity.

    PubMed

    Slinker, B K; Glantz, S A

    1985-07-01

    Multiple linear regression, in which several predictor variables are related to a response variable, is a powerful statistical tool for gaining quantitative insight into complex in vivo physiological systems. For these insights to be correct, all predictor variables must be uncorrelated. However, in many physiological experiments the predictor variables cannot be precisely controlled and thus change in parallel (i.e., they are highly correlated). There is a redundancy of information about the response, a situation called multicollinearity, that leads to numerical problems in estimating the parameters in regression equations; the parameters are often of incorrect magnitude or sign or have large standard errors. Although multicollinearity can be avoided with good experimental design, not all interesting physiological questions can be studied without encountering multicollinearity. In these cases various ad hoc procedures have been proposed to mitigate multicollinearity. Although many of these procedures are controversial, they can be helpful in applying multiple linear regression to some physiological problems.
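A standard diagnostic for the redundancy described here is the variance inflation factor (VIF); the abstract does not name it, but it quantifies exactly this multicollinearity. A minimal sketch on synthetic data:

```python
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R_j^2), obtained
    here from the diagonal of the inverse correlation matrix of the
    predictor columns of X."""
    R = np.corrcoef(X, rowvar=False)
    return np.diag(np.linalg.inv(R))

rng = np.random.default_rng(0)
a = rng.normal(size=300)
b = a + 0.05 * rng.normal(size=300)       # nearly collinear with a
c = rng.normal(size=300)
v = vif(np.column_stack([a, b, c]))       # large VIFs flag multicollinearity
```

A common rule of thumb treats VIF above roughly 10 as serious; the two collinear columns here greatly exceed that, while the independent column stays near 1.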

  13. Linear data mining the Wichita clinical matrix suggests sleep and allostatic load involvement in chronic fatigue syndrome.

    PubMed

    Gurbaxani, Brian M; Jones, James F; Goertzel, Benjamin N; Maloney, Elizabeth M

    2006-04-01

To provide a mathematical introduction to the Wichita (KS, USA) clinical dataset, which comprises all of the nongenetic data (no microarray or single-nucleotide polymorphism data) from the 2-day clinical evaluation, and to show the preliminary findings, and the limitations, of popular matrix algebra-based data mining techniques. An initial matrix of 440 variables by 227 human subjects was reduced to 183 variables by 164 subjects. Variables were excluded if they correlated strongly with chronic fatigue syndrome (CFS) case classification by design (for example, the multidimensional fatigue inventory [MFI] data), were otherwise self-reporting in nature and also tended to correlate strongly with CFS classification, or were sparse or nonvarying between case and control. Subjects were excluded if they did not fall clearly into well-defined CFS classifications, had comorbid depression with melancholic features, or met other medical or psychiatric exclusions. The popular data mining techniques principal components analysis (PCA) and linear discriminant analysis (LDA) were used to determine how well the data separated into groups. Two different feature selection methods helped identify the most discriminating parameters. Although purely biological features (variables) were found to separate CFS cases from controls, including many allostatic load and sleep-related variables, most parameters were not statistically significant individually. However, biological correlates of CFS, such as heart rate and heart rate variability, require further investigation. Feature selection of a limited number of variables from the purely biological dataset produced better separation between groups than a PCA of the entire dataset. Feature selection highlighted the importance of many of the allostatic load variables studied in more detail by Maloney and colleagues in this issue [1], as well as some sleep-related variables.
Nonetheless, matrix linear algebra-based data mining approaches appeared to be of limited utility when compared with more sophisticated nonlinear analyses on richer data types, such as those found in Maloney and colleagues [1] and Goertzel and colleagues [2] in this issue.
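The PCA step used above can be sketched in a few lines. The synthetic subjects-by-variables matrix below merely stands in for the clinical matrix, which is not reproduced here:

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the top-k principal components (via
    SVD of the column-centered matrix). Returns the component scores
    and the fraction of variance explained by each retained component."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, s[:k] ** 2 / np.sum(s ** 2)

# synthetic stand-in: 164 subjects x 5 variables, two deliberately correlated
rng = np.random.default_rng(0)
X = rng.normal(size=(164, 5))
X[:, 1] = 2 * X[:, 0] + 0.1 * rng.normal(size=164)
scores, explained = pca(X, 2)
```

Because two columns are strongly correlated, the first component alone captures well over half of the total variance.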

  14. Nonlinear Stress/Strain Behavior of a Synthetic Porous Medium at Seismic Frequencies

    NASA Astrophysics Data System (ADS)

    Roberts, P. M.; Ibrahim, R. H.

    2008-12-01

    Laboratory experiments on porous core samples have shown that seismic-band (100 Hz or less) mechanical, axial stress/strain cycling of the porous matrix can influence the transport behavior of fluids and suspended particles during steady-state fluid flow through the cores. In conjunction with these stimulated transport experiments, measurements of the applied dynamic axial stress/strain were made to investigate the nonlinear mechanical response of porous media for a poorly explored range of frequencies from 1 to 40 Hz. A unique core-holder apparatus that applies low-frequency mechanical stress/strain to 2.54-cm-diameter porous samples during constant-rate fluid flow was used for these experiments. Applied stress was measured with a load cell in series with the source and porous sample, and the resulting strain was measured with an LVDT attached to the core face. A synthetic porous system consisting of packed 1-mm-diameter glass beads was used to investigate both stress/strain and stimulated mass-transport behavior under idealized conditions. The bead pack was placed in a rubber sleeve and static confining stresses of 2.4 MPa radial and 1.7 MPa axial were applied to the sample. Sinusoidal stress oscillations were applied to the sample at 1 to 40 Hz over a range of RMS stress amplitude from 37 to 275 kPa. Dynamic stress/strain was measured before and after the core was saturated with deionized water. The slope of the linear portion of each stress/strain hysteresis loop was used to estimate Young's modulus as a function of frequency and amplitude for both the dry and wet sample. The modulus was observed to increase after the dry sample was saturated. For both dry and wet cases, the modulus decreased with increasing dynamic RMS stress amplitude at a constant frequency of 23 Hz. At constant RMS stress amplitude, the modulus increased with increasing frequency for the wet sample but remained constant for the dry sample. 
The observed nonlinear behavior of Young's modulus and the dependence of stress/strain hysteresis on strain amplitude and frequency have implications for how seismic waves can influence the mechanical properties of granular porous materials in the Earth. This work was funded by the U.S. Department of Energy Basic Energy Sciences Program under Los Alamos National Laboratory contract no. DE-AC52-06NA25396.

  15. Measurement station for interim inspections of Lightbridge metallic fuel rods at the Halden Boiling Water Reactor

    NASA Astrophysics Data System (ADS)

Hartmann, C.; Totemeier, A.; Holcombe, S.; Liverud, J.; Limi, M.; Hansen, J. E.; Navestad, E.

    2018-01-01

Lightbridge Corporation has developed a new Uranium-Zirconium based metallic fuel. The fuel rods are manufactured via a co-extrusion process, and are characterized by their multi-lobed (cruciform-shaped) cross section. The fuel rods are also helically twisted in the axial direction. Two experimental fuel assemblies, each containing four Lightbridge fuel rods, are scheduled to be irradiated in the Halden Boiling Water Reactor (HBWR) starting in 2018. In addition to on-line monitoring of fuel rod elongation and critical assembly conditions (e.g. power, flow rates, coolant temperatures) during the irradiation, several key parameters of the fuel will be measured out-of-core during interim inspections. An inspection measurement station for use in the irradiated fuel handling compartment at the HBWR has therefore been developed for this purpose. The multi-lobed cladding cross section combined with the spiral shape of the Lightbridge metallic fuel rods requires a high-precision, low-friction guiding system to ensure good position repeatability. The measurement station is equipped with a combination of instruments and equipment supplied by third-party vendors and instruments and equipment developed at the Institute for Energy Technology (IFE). Two sets of floating linear variable differential transformer (LVDT) pairs are used to measure swelling and diameter changes between the lobes and the valleys over the length of the fuel rods. Eddy-current probes are used to measure the thickness of oxide layers in the valleys and on the lobe tips, and also to detect possible surface cracks/pores. The measurement station also accommodates gamma scans. Additionally, an eddy-current probe has been developed at IFE specifically to detect potential gaps or discontinuities in the bonding layer between the metallic fuel and the Zirconium alloy cladding. Potential gaps in the bonding layer will be hidden behind a 0.5-1.0 mm thick cladding wall. 
It has therefore been necessary to perform a careful design study of the probe geometry. For this, finite element analysis (FEA) has been performed in combination with practical validation tests on representative fuel dummies with machined flaws to find the probe geometry that best detects a hidden flaw. Tests performed thus far show that gaps down to 25 μm thickness can be detected with good repeatability and good discrimination from lift-off signals.
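LVDT-based measurements like those above rely on a calibration mapping output voltage to displacement. The following is a hypothetical sketch of that routine linear-calibration step; the voltages and displacements are invented for illustration, not IFE data:

```python
import numpy as np

# hypothetical calibration data: known micrometer displacements (mm)
# versus the measured LVDT output (V)
disp = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
volts = np.array([-1.98, -0.99, 0.02, 1.01, 2.00])

# fit displacement = gain * voltage + offset by least squares
Amat = np.column_stack([volts, np.ones_like(volts)])
(gain, offset), *_ = np.linalg.lstsq(Amat, disp, rcond=None)

def to_displacement(v):
    """Convert an LVDT voltage reading to displacement (mm)."""
    return gain * v + offset
```

Within an LVDT's linear range this two-parameter fit is usually sufficient; outside it, a higher-order or temperature-dependent calibration is needed.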

  16. Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables

    NASA Astrophysics Data System (ADS)

    Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.

    2018-02-01

In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solution set is sought. The dual of this problem, the unconstrained maximization of a piecewise-quadratic function, is solved by Newton's method. The unconstrained optimization problem dual to the regularized problem of finding the projection onto the solution set of the system is also considered. A connection between duality theory, Newton's method, and some known algorithms for projecting onto the standard simplex is shown. Using the constraints of the transport linear programming problem as an example, it is demonstrated that exploiting their structure can increase the efficiency of calculating the generalized Hessian matrix. Some examples of numerical calculations using MATLAB are presented.
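The abstract notes a connection to known algorithms for projecting onto a standard simplex. For illustration, here is the classical sort-based simplex projection, not the paper's Newton-based dual method:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the standard simplex
    {x : x >= 0, sum(x) = 1}, via the classical sort-based algorithm:
    sort descending, find the largest support, shift, and clip."""
    u = np.sort(v)[::-1]                      # sorted descending
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

x = project_simplex(np.array([2.0, 0.0]))     # -> [1.0, 0.0]
```

The projection of [2, 0] lands on the vertex [1, 0], and any point already on the simplex is returned unchanged.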

  17. Linear theory for filtering nonlinear multiscale systems with model error

    PubMed Central

    Berry, Tyrus; Harlim, John

    2014-01-01

    In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. 
In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering procedure, simultaneously produce accurate filtering and equilibrium statistical prediction. In contrast, an offline estimation technique based on a linear regression, which fits the parameters to a training dataset without using the filter, yields filter estimates that are worse than the observations or even divergent when the slow variables are not fully observed. This finding does not imply that all offline methods are inherently inferior to the online method for nonlinear estimation problems; it only suggests that an ideal estimation technique should estimate all parameters simultaneously, whether online or offline. PMID:25002829

  18. Are your covariates under control? How normalization can re-introduce covariate effects.

    PubMed

    Pain, Oliver; Dudbridge, Frank; Ronald, Angelica

    2018-04-30

Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
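The recommended order of operations can be sketched: apply the rank-based INT first, then regress out the covariate, leaving residuals that are normally scored and, by OLS orthogonality, linearly uncorrelated with the covariate. A minimal sketch using the Blom offset (c = 3/8, one common choice; other offsets exist):

```python
import numpy as np
from statistics import NormalDist

def rank_int(y, c=0.375):
    """Rank-based inverse normal transform with the Blom offset
    (assumes no ties in y)."""
    n = len(y)
    ranks = y.argsort().argsort() + 1          # ranks 1..n
    inv = NormalDist().inv_cdf
    return np.array([inv((r - c) / (n - 2 * c + 1)) for r in ranks])

# recommended order: transform first, then regress out the covariate
rng = np.random.default_rng(0)
z = rng.normal(size=500)                       # covariate
y = z + rng.chisquare(3, size=500)             # skewed dependent variable
t = rank_int(y)
beta = np.polyfit(z, t, 1)                     # regress transformed y on z
resid = t - np.polyval(beta, z)                # residuals for downstream tests
```

The transformed scores are approximately standard normal, and the residuals have essentially zero linear correlation with the covariate.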

  19. Linear ordinary differential equations with constant coefficients. Revisiting the impulsive response method using factorization

    NASA Astrophysics Data System (ADS)

    Camporesi, Roberto

    2011-06-01

We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary: we assume only a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as the other more advanced approaches: the Laplace transform, linear systems, the general theory of linear equations with variable coefficients, and the variation of constants method. The approach presented here can be used in a first course on differential equations for science and engineering majors.
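The factorization idea can be illustrated on a second-order equation with hypothetical constant coefficients a and b, a ≠ b (a sketch consistent with, but not taken from, the paper):

```latex
% Factor the operator for  y'' - (a+b) y' + ab\, y = f(t):
\[
  L = D^2 - (a+b)D + ab = (D - a)(D - b), \qquad D = \tfrac{d}{dt}.
\]
% Solving the two first-order factors in succession yields the
% impulsive response
\[
  g(t) = \frac{e^{at} - e^{bt}}{a - b},
\]
% and a particular solution with zero initial data is the convolution
\[
  y_p(t) = \int_0^t g(t - s)\, f(s)\, ds .
\]
```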

  20. Linear phase compressive filter

    DOEpatents

    McEwan, T.E.

    1995-06-06

A phase-linear filter for soliton suppression takes the form of a laddered series of stages of non-commensurate low-pass filters, each low-pass filter having a series-coupled inductance (L) and, to ground, a reverse-biased voltage-dependent varactor diode which acts as a variable capacitance (C). The L and C values are set to levels corresponding to a linear or conventional phase-linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line, and capacitance is mapped from the linear case using a large-signal equivalent of a nonlinear transmission line. 2 figs.

  1. SU-G-TeP3-01: A New Approach for Calculating Variable Relative Biological Effectiveness in IMPT Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, W; Randeniya, K; Grosshans, D

    2016-06-15

Purpose: To investigate the impact of a new approach for calculating relative biological effectiveness (RBE) in intensity-modulated proton therapy (IMPT) optimization on RBE-weighted dose distributions. This approach includes the nonlinear RBE for the high linear energy transfer (LET) region, which was revealed by recent experiments at our institution. In addition, this approach utilizes RBE data as a function of LET without using dose-averaged LET in calculating RBE values. Methods: We used a two-piece function for calculating RBE from LET. Within the Bragg peak, RBE is linearly correlated to LET. Beyond the Bragg peak, we use a nonlinear (quadratic) RBE function of LET based on our experimental data. The IMPT optimization was devised to incorporate variable RBE by maximizing biological effect (based on the linear-quadratic model) in tumor and minimizing biological effect in normal tissues. Three glioblastoma patients from our institution were retrospectively selected for this study. For each patient, three optimized IMPT plans were created based on three RBE resolutions: fixed RBE of 1.1 (RBE-1.1), variable RBE based on a linear RBE-LET relationship (RBE-L), and variable RBE based on a linear and quadratic relationship (RBE-LQ). The RBE-weighted dose distributions of each optimized plan were evaluated in terms of the different RBE values, i.e., RBE-1.1, RBE-L and RBE-LQ. Results: The RBE-weighted doses recalculated from RBE-1.1 based optimized plans demonstrated an increasing pattern from RBE-1.1 through RBE-L to RBE-LQ consistently for all three patients. The variable-RBE (RBE-L and RBE-LQ) weighted dose distributions recalculated from RBE-L and RBE-LQ based optimization were more homogeneous within the targets and spared the critical structures better than those recalculated from RBE-1.1 based optimization.
Conclusion: We implemented a new approach for RBE calculation and optimization and demonstrated its potential benefits for improving tumor coverage and normal tissue sparing in IMPT planning.

  2. Robust best linear estimator for Cox regression with instrumental variables in whole cohort and surrogates with additive measurement error in calibration sample

    PubMed Central

    Wang, Ching-Yun; Song, Xiao

    2017-01-01

Biomedical researchers are often interested in estimating the effect of an environmental exposure in relation to a chronic disease endpoint. However, the exposure variable of interest may be measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies an additive measurement error model, but it may not have repeated measurements. The subset in which the surrogate variables are available is called a calibration sample. In addition to the surrogate variables that are available among the subjects in the calibration sample, we consider the situation when there is an instrumental variable available for all study subjects. An instrumental variable is correlated with the unobserved true exposure variable, and hence can be useful in the estimation of the regression coefficients. In this paper, we propose a nonparametric method for Cox regression using the observed data from the whole cohort. The nonparametric estimator is the best linear combination of a nonparametric correction estimator from the calibration sample and the difference of the naive estimators from the calibration sample and the whole cohort. The asymptotic distribution is derived, and the finite sample performance of the proposed estimator is examined via intensive simulation studies. The methods are applied to the Nutritional Biomarkers Study of the Women’s Health Initiative. PMID:27546625

  3. Study of Linearization of Optical Polymer Modulators

    DTIC Science & Technology

    2004-02-01

To improve the Spur Free Dynamic Range of analog electro-optic modulators in the 10 GHz regime, techniques for improving the linearity of these devices must be developed. This report discusses an investigation into electro-optic directional couplers that use variable coupling in polymer-based

  4. MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)

    EPA Science Inventory

We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...

  5. Control logic to track the outputs of a command generator or randomly forced target

    NASA Technical Reports Server (NTRS)

    Trankle, T. L.; Bryson, A. E., Jr.

    1977-01-01

    A procedure is presented for synthesizing time-invariant control logic to cause the outputs of a linear plant to track the outputs of an unforced (or randomly forced) linear dynamic system. The control logic uses feed-forward of the reference system state variables and feedback of the plant state variables. The feed-forward gains are obtained from the solution of a linear algebraic matrix equation of the Liapunov type. The feedback gains are the usual regulator gains, determined to stabilize (or augment the stability of) the plant, possibly including integral control. The method is applied here to the design of control logic for a second-order servomechanism to follow a linearly increasing (ramp) signal, an unstable third-order system with two controls to track two separate ramp signals, and a sixth-order system with two controls to track a constant signal and an exponentially decreasing signal (aircraft landing-flare or glide-slope-capture with constant velocity).
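The "linear algebraic matrix equation of the Liapunov type" that yields the feed-forward gains can be illustrated in the standard output-regulation (Sylvester-type) form. The plant, reference generator, and tracking requirement below are invented for the example, not taken from the paper:

```python
import numpy as np

# Hypothetical plant xdot = A x + B u with output C x, and an unforced
# reference generator zdot = F z whose output Q z is to be tracked.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[0.0, 1.0], [0.0, 0.0]])   # generates ramp signals
Q = np.array([[1.0, 0.0]])

n, m, p = A.shape[0], F.shape[0], B.shape[1]

# Feed-forward gains come from the linear matrix equations
#   A P + B G = P F   and   C P = Q,
# stacked here into one system in vec(P), vec(G) (column-major vec).
In, Im = np.eye(n), np.eye(m)
M = np.block([
    [np.kron(Im, A) - np.kron(F.T, In), np.kron(Im, B)],
    [np.kron(Im, C), np.zeros((C.shape[0] * m, p * m))],
])
rhs = np.concatenate([np.zeros(n * m), Q.reshape(-1, order="F")])
sol = np.linalg.solve(M, rhs)
P = sol[: n * m].reshape(n, m, order="F")   # state feed-forward map
G = sol[n * m:].reshape(p, m, order="F")    # feed-forward gains

print(np.allclose(A @ P + B @ G, P @ F), np.allclose(C @ P, Q))
```

With these matrices the solution works out to P = I and G = [2, 3]; the control u = G z + K (x - P z) then tracks the ramp for any stabilizing regulator gain K.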

  6. Non-linear modelling and control of semi-active suspensions with variable damping

    NASA Astrophysics Data System (ADS)

    Chen, Huang; Long, Chen; Yuan, Chao-Chun; Jiang, Hao-Bin

    2013-10-01

    Electro-hydraulic dampers can provide variable damping force that is modulated by varying the command current; furthermore, they offer advantages such as lower power, rapid response, lower cost, and simple hardware. However, accurate characterisation of non-linear f-v properties in pre-yield and force saturation in post-yield is still required. Meanwhile, traditional linear or quarter vehicle models contain various non-linearities. The development of a multi-body dynamics model is very complex, and therefore, SIMPACK was used with suitable improvements for model development and numerical simulations. A semi-active suspension was built based on a belief-desire-intention (BDI)-agent model framework. Vehicle handling dynamics were analysed, and a co-simulation analysis was conducted in SIMPACK and MATLAB to evaluate the BDI-agent controller. The design effectively improved ride comfort, handling stability, and driving safety. A rapid control prototype was built based on dSPACE to conduct a real vehicle test. The test and simulation results were consistent, which verified the simulation.

  7. Electricity Consumption in the Industrial Sector of Jordan: Application of Multivariate Linear Regression and Adaptive Neuro-Fuzzy Techniques

    NASA Astrophysics Data System (ADS)

    Samhouri, M.; Al-Ghandoor, A.; Fouad, R. H.

    2009-08-01

In this study, two techniques for modeling the electricity consumption of the Jordanian industrial sector are presented: (i) multivariate linear regression and (ii) neuro-fuzzy models. Electricity consumption is modeled as a function of different variables such as number of establishments, number of employees, electricity tariff, prevailing fuel prices, production outputs, capacity utilizations, and structural effects. It was found that industrial production and capacity utilization are the most important variables with a significant effect on future electrical power demand. The results showed that both the multivariate linear regression and neuro-fuzzy models are generally comparable and can be used adequately to simulate industrial electricity consumption. However, a comparison based on the root of the average squared error of the data suggests that the neuro-fuzzy model performs slightly better for future prediction of electricity consumption than the multivariate linear regression model. Such results are in full agreement with similar work, using different methods, for other countries.
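A minimal sketch of the multivariate linear regression step, with synthetic data and hypothetical variable names standing in for the drivers listed above (the paper's actual data and coefficients are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical explanatory variables in arbitrary units.
production = rng.uniform(50, 150, n)       # industrial production output
capacity_util = rng.uniform(0.5, 1.0, n)   # capacity utilisation
tariff = rng.uniform(0.05, 0.15, n)        # electricity tariff

# Synthetic consumption generated from a known linear model plus noise.
true_beta = np.array([10.0, 0.8, 40.0, -25.0])  # intercept and slopes
X = np.column_stack([np.ones(n), production, capacity_util, tariff])
consumption = X @ true_beta + rng.normal(0, 1.0, n)

# Ordinary least squares fit and root-mean-squared error.
beta_hat, *_ = np.linalg.lstsq(X, consumption, rcond=None)
rmse = np.sqrt(np.mean((consumption - X @ beta_hat) ** 2))
print(beta_hat, rmse)
```

The fitted slopes recover the generating coefficients up to sampling noise; the same RMSE criterion is what the abstract uses to compare the regression against the neuro-fuzzy model.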

  8. Non-linear feedback control of the p53 protein-mdm2 inhibitor system using the derivative-free non-linear Kalman filter.

    PubMed

    Rigatos, Gerasimos G

    2016-06-01

It is proven that the model of the p53-mdm2 protein synthesis loop is differentially flat, and using a diffeomorphism (change of state variables) proposed by differential flatness theory it is shown that the protein synthesis model can be transformed into the canonical (Brunovsky) form. This enables the design of a feedback control law that maintains the concentration of the p53 protein at the desired levels. To estimate the non-measurable elements of the state vector describing the p53-mdm2 system dynamics, the derivative-free non-linear Kalman filter is used. Moreover, to compensate for modelling uncertainties and external disturbances that affect the p53-mdm2 system, the derivative-free non-linear Kalman filter is re-designed as a disturbance observer. The derivative-free non-linear Kalman filter consists of the Kalman filter recursion applied to the linearised equivalent of the protein synthesis model, together with an inverse transformation based on differential flatness theory that allows estimates of the state variables of the initial non-linear model to be retrieved. The proposed non-linear feedback control and perturbation compensation method for the p53-mdm2 system can result in more efficient chemotherapy schemes in which the infusion of medication is better administered.

  9. Statistical Techniques for Analyzing Process or "Similarity" Data in TID Hardness Assurance

    NASA Technical Reports Server (NTRS)

    Ladbury, R.

    2010-01-01

    We investigate techniques for estimating the contributions to TID hardness variability for families of linear bipolar technologies, determining how part-to-part and lot-to-lot variability change for different part types in the process.

  10. Early Parallel Activation of Semantics and Phonology in Picture Naming: Evidence from a Multiple Linear Regression MEG Study

    PubMed Central

    Miozzo, Michele; Pulvermüller, Friedemann; Hauk, Olaf

    2015-01-01

    The time course of brain activation during word production has become an area of increasingly intense investigation in cognitive neuroscience. The predominant view has been that semantic and phonological processes are activated sequentially, at about 150 and 200–400 ms after picture onset. Although evidence from prior studies has been interpreted as supporting this view, these studies were arguably not ideally suited to detect early brain activation of semantic and phonological processes. We here used a multiple linear regression approach to magnetoencephalography (MEG) analysis of picture naming in order to investigate early effects of variables specifically related to visual, semantic, and phonological processing. This was combined with distributed minimum-norm source estimation and region-of-interest analysis. Brain activation associated with visual image complexity appeared in occipital cortex at about 100 ms after picture presentation onset. At about 150 ms, semantic variables became physiologically manifest in left frontotemporal regions. In the same latency range, we found an effect of phonological variables in the left middle temporal gyrus. Our results demonstrate that multiple linear regression analysis is sensitive to early effects of multiple psycholinguistic variables in picture naming. Crucially, our results suggest that access to phonological information might begin in parallel with semantic processing around 150 ms after picture onset. PMID:25005037

  11. Secondary Students' Considerations of Variability in Measurement Activities Based on Authentic Practices

    ERIC Educational Resources Information Center

    Dierdorp, Adri; Bakker, Arthur; Ben-Zvi, Dani; Makar, Katie

    2017-01-01

    Measurement activities were designed in this study on the basis of authentic professional practices in which linear regression is used, to study considerations of variability by students in Grade 12 (aged 17-18). The question addressed in this article is: In what ways do secondary students consider variability within these measurement activities?…

  12. Multiplicity fluctuation analysis of target residues in nucleus-emulsion collisions at a few hundred MeV/nucleon

    NASA Astrophysics Data System (ADS)

    Zhang, Dong-Hai; Chen, Yan-Ling; Wang, Guo-Rong; Li, Wang-Dong; Wang, Qing; Yao, Ji-Jie; Zhou, Jian-Guo; Zheng, Su-Hua; Xu, Li-Ling; Miao, Hui-Feng; Wang, Peng

    2014-07-01

Multiplicity fluctuation of the target evaporated fragments emitted in 290 MeV/u 12C-AgBr, 400 MeV/u 12C-AgBr, 400 MeV/u 20Ne-AgBr and 500 MeV/u 56Fe-AgBr interactions is investigated using the scaled factorial moment method in two-dimensional normal phase space and cumulative variable space, respectively. It is found that in normal phase space the scaled factorial moment (ln⟨Fq⟩) increases linearly with the number of phase-space divisions (lnM) for lower q-values, while for higher q-values it first increases linearly with lnM and then saturates or decreases. In cumulative variable space, ln⟨Fq⟩ decreases linearly with increasing lnM. This indicates that no evidence of non-statistical multiplicity fluctuation is observed in our data sets, so any fluctuation indicated by the normal-phase-space analysis is entirely caused by the non-uniformity of the single-particle density distribution.

  13. Commercial video frame rates can produce reliable results for both normal and CP spastic gait's spatiotemporal, angular, and linear displacement variables.

    PubMed

    Nikodelis, Thomas; Moscha, Dimitra; Metaxiotis, Dimitris; Kollias, Iraklis

    2011-08-01

To investigate what sampling frequency is adequate for gait analysis, the correlation of spatiotemporal parameters and the kinematic differences between normal and CP spastic gait were assessed for three sampling frequencies (100 Hz, 50 Hz, 25 Hz). Spatiotemporal, angular, and linear displacement variables in the sagittal plane, along with their 1st and 2nd derivatives, were analyzed. Spatiotemporal stride parameters were highly correlated among the three sampling frequencies. The statistical model (2 × 3 ANOVA) gave no interactions between the factors group and frequency, indicating that group differences were invariant of sampling frequency. Lower frequencies led to smoother curves for all the variables, but with a loss of information, especially for the 2nd derivatives, an effect analogous to oversmoothing. It is proposed that when only spatiotemporal stride parameters and angular and linear displacements are to be used in gait reports, commercial video camera speeds (25/30 Hz, or 50/60 Hz when deinterlaced) can be considered a low-cost solution that produces acceptable results.

  14. Evaluation of confidence intervals for a steady-state leaky aquifer model

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1999-01-01

The fact that dependent variables of groundwater models are generally nonlinear functions of model parameters is shown to be a potentially significant factor in calculating accurate confidence intervals for both model parameters and functions of the parameters, such as the values of dependent variables calculated by the model. The Lagrangian method of Vecchia and Cooley [Vecchia, A.V. and Cooley, R.L., Water Resources Research, 1987, 23(7), 1237-1250] was used to calculate nonlinear Scheffe-type confidence intervals for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km2 of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct. Results show that nonlinear effects can cause the nonlinear intervals to be asymmetric and either larger or smaller than the linear approximations. Prior information on transmissivities helps reduce the size of the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

  15. Study of a control strategy for grid side converter in doubly- fed wind power system

    NASA Astrophysics Data System (ADS)

    Zhu, D. J.; Tan, Z. L.; Yuan, F.; Wang, Q. Y.; Ding, M.

    2016-08-01

The grid side converter is an important part of the excitation system of the doubly-fed asynchronous generator used in wind power systems. As a three-phase voltage-source PWM converter, it can not only transfer slip power in the form of active power but also adjust the reactive power of the grid. This paper proposes a control approach for improving its performance, in which the dc voltage is regulated by a sliding mode variable structure control scheme and the current by a variable structure controller based on input-output linearization. The theoretical basis of sliding mode variable structure control is introduced and a stability proof is presented. The switching function of the system is deduced, a sliding mode voltage controller model is established, and the output of the outer voltage loop serves as the reference for the inner current loop. An affine nonlinear two-input, two-output model of the current equations on the d-q axes is established, and it is proved to meet the conditions for exact linearization. To improve the anti-jamming capability of the system, variable structure control is added to the current controller and the corresponding control law is deduced. The resulting dual-loop control combines sliding mode control in the outer voltage loop with linearization-based variable structure control in the inner current loop. Simulation results demonstrate the effectiveness of the proposed control strategy even during dc reference voltage and system load variations.

  16. Using Copulas in the Estimation of the Economic Project Value in the Mining Industry, Including Geological Variability

    NASA Astrophysics Data System (ADS)

    Krysa, Zbigniew; Pactwa, Katarzyna; Wozniak, Justyna; Dudek, Michal

    2017-12-01

Geological variability is one of the main factors influencing the viability of mining investment projects and the technical risk of geology projects. To date, analyses of the economic viability of new extraction fields have been performed for the KGHM Polska Miedź S.A. underground copper mine at the Fore Sudetic Monocline under the assumption of a constant, averaged content of useful elements. The research presented in this article verifies the value of production from copper and silver ore, for the same economic background, using variable cash flows resulting from the local variability of useful element content. Furthermore, the ore economic model is investigated for a significant difference between the model value estimated using a linear correlation between useful element content and the height of the mine face, and the approach in which the correlation of model parameters is based on the copula best matching the information capacity criterion. The use of copulas allows the simulation to take multi-variable dependencies into account simultaneously, thereby better reflecting the dependency structure, which linear correlation does not capture. Calculation results of the economic model used for deposit value estimation indicate that the correlation between copper and silver estimated with the use of a copula generates higher variation of possible project values than modelling based on linear correlation. The average deposit value remains unchanged.
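The mechanics of copula-based dependence modelling can be sketched with a Gaussian copula and assumed lognormal grade marginals. All parameters below are hypothetical (the paper selects its copula by an information capacity criterion, which is not reproduced here):

```python
import numpy as np
from scipy.stats import norm, lognorm, spearmanr

rng = np.random.default_rng(2)
n = 20_000
rho = 0.8  # assumed dependence between Cu and Ag grades

# 1. Sample correlated standard normals (the Gaussian copula's core).
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(np.zeros(2), cov, size=n)

# 2. Map to uniforms via the normal CDF -- this is the copula sample.
u = norm.cdf(z)

# 3. Apply (hypothetical) marginal distributions for the two grades.
cu_grade = lognorm(s=0.4, scale=1.8).ppf(u[:, 0])   # % Cu
ag_grade = lognorm(s=0.6, scale=45.0).ppf(u[:, 1])  # g/t Ag

# Rank correlation survives the non-linear marginal transforms.
print(spearmanr(cu_grade, ag_grade)[0])
```

Because the dependence structure is specified separately from the marginals, the simulated grade pairs keep their rank correlation under any monotone marginal choice, which is exactly what a linear (Pearson) correlation on the raw grades cannot guarantee.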

  17. Interactions between Canopy Structure and Herbaceous Biomass along Environmental Gradients in Moist Forest and Dry Miombo Woodland of Tanzania.

    PubMed

    Shirima, Deo D; Pfeifer, Marion; Platts, Philip J; Totland, Ørjan; Moe, Stein R

    2015-01-01

    We have limited understanding of how tropical canopy foliage varies along environmental gradients, and how this may in turn affect forest processes and functions. Here, we analyse the relationships between canopy leaf area index (LAI) and above ground herbaceous biomass (AGBH) along environmental gradients in a moist forest and miombo woodland in Tanzania. We recorded canopy structure and herbaceous biomass in 100 permanent vegetation plots (20 m × 40 m), stratified by elevation. We quantified tree species richness, evenness, Shannon diversity and predominant height as measures of structural variability, and disturbance (tree stumps), soil nutrients and elevation as indicators of environmental variability. Moist forest and miombo woodland differed substantially with respect to nearly all variables tested. Both structural and environmental variables were found to affect LAI and AGBH, the latter being additionally dependent on LAI in moist forest but not in miombo, where other factors are limiting. Combining structural and environmental predictors yielded the most powerful models. In moist forest, they explained 76% and 25% of deviance in LAI and AGBH, respectively. In miombo woodland, they explained 82% and 45% of deviance in LAI and AGBH. In moist forest, LAI increased non-linearly with predominant height and linearly with tree richness, and decreased with soil nitrogen except under high disturbance. Miombo woodland LAI increased linearly with stem density, soil phosphorous and nitrogen, and decreased linearly with tree species evenness. AGBH in moist forest decreased with LAI at lower elevations whilst increasing slightly at higher elevations. AGBH in miombo woodland increased linearly with soil nitrogen and soil pH. Overall, moist forest plots had denser canopies and lower AGBH compared with miombo plots. 
Further field studies are encouraged to disentangle the direct influence of LAI on AGBH from complex interrelationships between stand structure, environmental gradients and disturbance in African forests and woodlands.

  18. A FORTRAN program for multivariate survival analysis on the personal computer.

    PubMed

    Mulder, P G

    1988-01-01

In this paper a FORTRAN program is presented for multivariate survival or life table regression analysis in a competing-risks situation. The relevant failure rate (for example, a particular disease or mortality rate) is modelled as a log-linear function of a vector of (possibly time-dependent) explanatory variables. The explanatory variables may also include the variable time itself, which is useful for parameterizing piecewise exponential time-to-failure distributions in a Gompertz-like or Weibull-like way as a more efficient alternative to Cox's proportional hazards model. Maximum likelihood estimates of the coefficients of the log-linear relationship are obtained from the iterative Newton-Raphson method. The program runs on a personal computer under DOS; running time is quite acceptable, even for large samples.
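The numerical core of such a program, Newton-Raphson maximization of a log-linear rate model, can be sketched as follows. The piecewise-exponential survival likelihood has the same score and information as Poisson regression, so the sketch uses the Poisson form with simulated data (coefficients and sample are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000

# Simulated covariates and a known log-linear rate lambda = exp(X beta).
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-1.0, 0.5])
events = rng.poisson(np.exp(X @ beta_true))

# Newton-Raphson iterations for the log-likelihood maximum.
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    score = X.T @ (events - mu)        # gradient of the log-likelihood
    info = X.T @ (mu[:, None] * X)     # Fisher information matrix
    step = np.linalg.solve(info, score)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

print(beta)  # close to beta_true
```

The inverse of the final information matrix also supplies the asymptotic covariance of the estimates, which is how such programs typically report standard errors.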

  19. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.

  20. A Block-LU Update for Large-Scale Linear Programming

    DTIC Science & Technology

    1990-01-01

linear programming problems. Results are given from runs on the Cray Y-MP. 1. Introduction We wish to use the simplex method [Dan63] to solve the standard linear program: minimize c^T x subject to Ax = b, l <= x <= u, where A is an m by n matrix and c, x, l, u, and b are of appropriate dimension. The simplex...the identity matrix. The basis is used to solve for the search direction y and the dual variables pi in the following linear systems: B_k y = a_q (1.2) and
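The two linear systems quoted above, B_k y = a_q for the search direction and its transpose for the dual variables, can both be solved from a single factorization of the basis. A minimal sketch with a hypothetical 3x3 basis, using SciPy's LU routines (the report's Block-LU update scheme itself is not reproduced):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Hypothetical basis matrix B_k, entering column a_q, and basic costs c_B.
B = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
a_q = np.array([1.0, 0.0, 2.0])
c_B = np.array([3.0, 1.0, 2.0])

# One LU factorization serves both simplex solves per iteration.
lu, piv = lu_factor(B)
y = lu_solve((lu, piv), a_q)             # search direction: B y = a_q
pi = lu_solve((lu, piv), c_B, trans=1)   # dual variables: B^T pi = c_B

print(np.allclose(B @ y, a_q), np.allclose(B.T @ pi, c_B))
```

Reusing the factorization for the transposed system is the standard trick; a Block-LU update then avoids refactorizing B_k from scratch as columns enter and leave the basis.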

  1. Variability of Radiosonde-Observed Precipitable Water in the Baltic Region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakobson, Erko; Ohvril, H.; Okulov, O.

The total mass of columnar water vapor (precipitable water, W) is an important parameter of atmospheric thermodynamic and radiative models. In this work, radiosonde observations from 17 aerological stations in the Baltic region over 14 years, 1989–2002, were used to examine the variability of precipitable water. A table of monthly and annual means of W for the stations is given. Seasonal and annual means of W are expressed as linear functions of geographical latitude. Linear formulas are also derived for the parameterization of precipitable water as a function of surface water vapor pressure at each station.

  2. Enhancing sparsity of Hermite polynomial expansions by iterative rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Baker, Nathan A.

    2016-02-01

Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is more sparse with new basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.

  3. FIRE: an SPSS program for variable selection in multiple linear regression analysis via the relative importance of predictors.

    PubMed

    Lorenzo-Seva, Urbano; Ferrando, Pere J

    2011-03-01

We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.

  4. Thermal annealing induced the tunable optical properties of silver thin films with linear variable thickness

    NASA Astrophysics Data System (ADS)

    Hong, Ruijin; Shao, Wen; Ji, Jialin; Tao, Chunxian; Zhang, Dawei

    2018-06-01

Silver thin films with linearly variable thickness were deposited at room temperature, and tunability of their optical properties and Raman scattering intensity was realized by a thermal annealing process. With increasing thickness, the topography of the as-annealed silver thin films was observed to develop from discontinuous nanospheres into a continuous structure, with a redshift of the surface plasmon resonance wavelength in the visible region. Both the varied nanosphere sizes and aggregation states of the as-annealed silver thin films contributed to significantly increasing the sensitivity of surface-enhanced Raman scattering (SERS).

  5. State Space Model with hidden variables for reconstruction of gene regulatory networks.

    PubMed

    Wu, Xi; Li, Peng; Wang, Nan; Gong, Ping; Perkins, Edward J; Deng, Youping; Zhang, Chaoyang

    2011-01-01

    State Space Model (SSM) is a relatively new approach to inferring gene regulatory networks. It requires less computational time than Dynamic Bayesian Networks (DBN). There are two types of variables in the linear SSM, observed variables and hidden variables. SSM uses an iterative method, namely Expectation-Maximization, to infer regulatory relationships from microarray datasets. The hidden variables cannot be directly observed from experiments. How to determine the number of hidden variables has a significant impact on the accuracy of network inference. In this study, we used SSM to infer gene regulatory networks (GRNs) from synthetic time series datasets, investigated Bayesian Information Criterion (BIC) and Principal Component Analysis (PCA) approaches to determining the number of hidden variables in SSM, and evaluated the performance of SSM in comparison with DBN. True GRNs and synthetic gene expression datasets were generated using GeneNetWeaver. Both DBN and linear SSM were used to infer GRNs from the synthetic datasets. The inferred networks were compared with the true networks. Our results show that inference precision varied with the number of hidden variables. For some regulatory networks, the inference precision of DBN was higher, but SSM performed better in other cases. Although the overall performance of the two approaches is comparable, SSM is much faster and capable of inferring much larger networks than DBN. This study provides useful information in handling the hidden variables and improving the inference precision.
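
    The PCA route to choosing the number of hidden variables can be sketched as follows. The 90% variance threshold and the synthetic two-driver dataset are illustrative assumptions, not values from the study.

```python
# Sketch: keep enough principal components of the expression matrix to
# explain a chosen fraction of the variance, and use that count as the
# number of hidden variables for the SSM.
import numpy as np

def n_hidden_by_pca(expr, threshold=0.90):
    # expr: genes x time-points matrix of expression values
    centered = expr - expr.mean(axis=1, keepdims=True)
    eigvals = np.linalg.eigvalsh(np.cov(centered))[::-1]   # descending
    frac = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(frac, threshold) + 1)

# Two latent drivers generate ten observed "genes" plus small noise.
t = np.linspace(0, 1, 30)
latent = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
loadings = np.array([[1, 0], [0, 1], [1, 1], [1, -1], [2, 1],
                     [1, 2], [2, -1], [-1, 2], [1.5, 0.5], [0.5, 1.5]])
rng = np.random.default_rng(7)
expr = loadings @ latent + 0.01 * rng.normal(size=(10, 30))

print(n_hidden_by_pca(expr))   # the two latent drivers are recovered
```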

  6. Optimal sensor placement for control of a supersonic mixed-compression inlet with variable geometry

    NASA Astrophysics Data System (ADS)

    Moore, Kenneth Thomas

    A method of using fluid dynamics models for the generation of models that are usable for control design and analysis is investigated. The problem considered is the control of the normal shock location in the VDC inlet, which is a mixed-compression, supersonic, variable-geometry inlet of a jet engine. A quasi-one-dimensional set of fluid equations incorporating bleed and moving walls is developed. An object-oriented environment is developed for simulation of flow systems under closed-loop control. A public interface between the controller and fluid classes is defined. A linear model representing the dynamics of the VDC inlet is developed from the finite difference equations, and its eigenstructure is analyzed. The order of this model is reduced using the square root balanced model reduction method to produce a reduced-order linear model that is suitable for control design and analysis tasks. A modification to this method that improves the accuracy of the reduced-order linear model for the purpose of sensor placement is presented and analyzed. The reduced-order linear model is used to develop a sensor placement method that quantifies, as a function of the sensor location, the ability of a sensor to provide information on the variable of interest for control. This method is used to develop a sensor placement metric for the VDC inlet. The reduced-order linear model is also used to design a closed-loop control system to control the shock position in the VDC inlet. The object-oriented simulation code is used to simulate the nonlinear fluid equations under closed-loop control.

  7. Process fault detection and nonlinear time series analysis for anomaly detection in safeguards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, T.L.; Mullen, M.F.; Wangen, L.E.

    In this paper we discuss two advanced techniques, process fault detection and nonlinear time series analysis, and apply them to the analysis of vector-valued and single-valued time-series data. We investigate model-based process fault detection methods for analyzing simulated, multivariate, time-series data from a three-tank system. The model predictions are compared with simulated measurements of the same variables to form residual vectors that are tested for the presence of faults (possible diversions in safeguards terminology). We evaluate two methods, testing all individual residuals with a univariate z-score and testing all variables simultaneously with the Mahalanobis distance, for their ability to detect loss of material in two different leak scenarios from the three-tank system: a leak without and with replacement of the lost volume. Nonlinear time-series analysis tools were compared with the linear methods popularized by Box and Jenkins. We compare prediction results using three nonlinear and two linear modeling methods on each of six simulated time series: two nonlinear and four linear. The nonlinear methods performed better at predicting the nonlinear time series and did as well as the linear methods at predicting the linear values.

  8. A multiple linear regression analysis of hot corrosion attack on a series of nickel base turbine alloys

    NASA Technical Reports Server (NTRS)

    Barrett, C. A.

    1985-01-01

    Multiple linear regression analysis was used to determine an equation for estimating hot corrosion attack for a series of Ni base cast turbine alloys. The U transform (i.e., U = sin^-1((%A/100)^(1/2)), the arcsine square-root transform) was shown to give the best estimate of the dependent variable, y. A complete second degree equation is described for the "centered" weight chemistries for the elements Cr, Al, Ti, Mo, W, Cb, Ta, and Co. In addition, linear terms for the minor elements C, B, and Zr were added for a basic 47 term equation. The best reduced equation was determined by the stepwise selection method with essentially 13 terms. The Cr term was found to be the most important, accounting for 60 percent of the explained variability in hot corrosion attack.
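
    Assuming the U transform is the usual arcsine square-root transform for percentage data, it can be computed as:

```python
# Arcsine square-root ("U") transform for percent-attack data; it
# stabilizes the variance of percentages before linear regression.
# The sample percentages below are invented for illustration.
import math

def u_transform(percent_attack):
    # U = asin(sqrt(%A / 100)), mapping 0-100% onto [0, pi/2]
    return math.asin(math.sqrt(percent_attack / 100.0))

for pct in (1.0, 25.0, 50.0, 100.0):
    print(f"{pct:6.1f}% -> U = {u_transform(pct):.4f}")
```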

  9. Scale of association: hierarchical linear models and the measurement of ecological systems

    Treesearch

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...

  10. Distillation of squeezing from non-Gaussian quantum states.

    PubMed

    Heersink, J; Marquardt, Ch; Dong, R; Filip, R; Lorenz, S; Leuchs, G; Andersen, U L

    2006-06-30

    We show that single copy distillation of squeezing from continuous variable non-Gaussian states is possible using linear optics and conditional homodyne detection. A specific non-Gaussian noise source, corresponding to a random linear displacement, is investigated experimentally. Conditioning the signal on a tap measurement, we observe probabilistic recovery of squeezing.

  11. Observed Score Linear Equating with Covariates

    ERIC Educational Resources Information Center

    Branberg, Kenny; Wiberg, Marie

    2011-01-01

    This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…

  12. Identifying the Factors That Influence Change in SEBD Using Logistic Regression Analysis

    ERIC Educational Resources Information Center

    Camilleri, Liberato; Cefai, Carmel

    2013-01-01

    Multiple linear regression and ANOVA models are widely used in applications since they provide effective statistical tools for assessing the relationship between a continuous dependent variable and several predictors. However these models rely heavily on linearity and normality assumptions and they do not accommodate categorical dependent…

  13. A study of the use of linear programming techniques to improve the performance in design optimization problems

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
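
    A minimal sketch of the Kreisselmeier-Steinhauser aggregation the study evaluates, written in a numerically stable log-sum-exp form (the constraint values and the sharpness parameter rho are illustrative):

```python
# The KS envelope function aggregates many constraints g_j(x) <= 0
# into one smooth, conservative constraint.
import math

def ks(constraints, rho=50.0):
    gmax = max(constraints)
    # Shifted log-sum-exp form avoids overflow for large rho
    return gmax + math.log(sum(math.exp(rho * (g - gmax))
                               for g in constraints)) / rho

g = [-0.5, -0.2, 0.1]       # one violated constraint (0.1 > 0)
print(round(ks(g), 4))      # slightly above max(g), i.e. conservative
```

    The KS value always bounds the worst constraint from above, which is why replacing all constraints with it keeps the optimum feasible at the cost of some conservatism.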

  14. A fresh look at linear ordinary differential equations with constant coefficients. Revisiting the impulsive response method using factorization

    NASA Astrophysics Data System (ADS)

    Camporesi, Roberto

    2016-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of the other more advanced approaches: Laplace transform, linear systems, the general theory of linear equations with variable coefficients and variation of parameters. The approach presented here can be used in a first course on differential equations for science and engineering majors.
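
    As a one-line illustration of the factorization idea (our example, not one from the paper):

```latex
% Factor the operator, then convolve with the impulsive response:
y'' - 3y' + 2y = f(t) \iff (D-1)(D-2)\,y = f(t), \qquad D = \tfrac{d}{dt}.
% The impulsive response solves the homogeneous equation with
% h(0) = 0,\ h'(0) = 1:
h(t) = \frac{e^{2t} - e^{t}}{2 - 1} = e^{2t} - e^{t},
% and a particular solution is the convolution
y_p(t) = \int_0^t h(t - s)\, f(s)\, ds .
```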

  15. On Performance of Linear Multiuser Detectors for Wireless Multimedia Applications

    NASA Astrophysics Data System (ADS)

    Agarwal, Rekha; Reddy, B. V. R.; Bindu, E.; Nayak, Pinki

    In this paper, the performance of different multi-rate schemes in a DS-CDMA system is evaluated. Multirate linear multiuser detectors with multiple processing gains are analyzed for synchronous Code Division Multiple Access (CDMA) systems. Variable data rate is achieved by varying the processing gain. Our conclusion is that the bit error rate for multirate and single rate systems can be made the same, with a tradeoff in the number of users in linear multiuser detectors.

  16. Commande de vol non lineaire d'un drone a voilure fixe par la methode du backstepping

    NASA Astrophysics Data System (ADS)

    Finoki, Edouard

    This thesis describes the design of a non-linear controller for a UAV using the backstepping method. The UAV is a fixed-wing aircraft, the NexSTAR ARF from Hobbico. The aim is to find the expressions of the aileron, elevator, and rudder deflections that command the flight path angle, the heading angle and the sideslip angle. Controlling the flight path angle allows steady, climbing or descending flight; controlling the heading angle allows choosing the heading; and annulling the sideslip angle allows an efficient flight. A good control technique has to ensure the stability of the system and provide optimal performance. Backstepping interlaces the choice of a Lyapunov function with the design of feedback control. This control technique works with the true non-linear model without any approximation. The procedure is to transform intermediate state variables into virtual inputs which will control other state variables. Advantages of this technique are its recursivity, its minimum control effort and its cascaded structure, which allows dividing a high order system into several simpler lower order systems. To design this non-linear controller, a non-linear model of the UAV was used. The equations of motion are very accurate; the aerodynamic coefficients result from interpolations over several variables essential in flight. The controller has been implemented in Matlab/Simulink and FlightGear.

  17. Multivariate Linear Regression and CART Regression Analysis of TBM Performance at Abu Hamour Phase-I Tunnel

    NASA Astrophysics Data System (ADS)

    Jakubowski, J.; Stypulkowski, J. B.; Bernardeau, F. G.

    2017-12-01

    The first phase of the Abu Hamour drainage and storm tunnel was completed in early 2017. The 9.5 km long, 3.7 m diameter tunnel was excavated with two Earth Pressure Balance (EPB) Tunnel Boring Machines from Herrenknecht. TBM operation processes were monitored and recorded by Data Acquisition and Evaluation System. The authors coupled collected TBM drive data with available information on rock mass properties, cleansed, completed with secondary variables and aggregated by weeks and shifts. Correlations and descriptive statistics charts were examined. Multivariate Linear Regression and CART regression tree models linking TBM penetration rate (PR), penetration per revolution (PPR) and field penetration index (FPI) with TBM operational and geotechnical characteristics were performed for the conditions of the weak/soft rock of Doha. Both regression methods are interpretable and the data were screened with different computational approaches allowing enriched insight. The primary goal of the analysis was to investigate empirical relations between multiple explanatory and responding variables, to search for best subsets of explanatory variables and to evaluate the strength of linear and non-linear relations. For each of the penetration indices, a predictive model coupling both regression methods was built and validated. The resultant models appeared to be stronger than constituent ones and indicated an opportunity for more accurate and robust TBM performance predictions.
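
    The idea of coupling a linear model with a regression tree can be caricatured with a one-split "stump" averaged with an ordinary least-squares line. Real CART grows a full tree, and the data here are invented for illustration.

```python
# Hypothetical sketch of blending MLR and CART-style predictions.
import statistics

def fit_linear(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    a0 = my - b * mx
    return lambda v: a0 + b * v

def fit_stump(x, y):
    # One-split regression tree: pick the split minimising the
    # summed within-leaf squared error.
    best = None
    for s in sorted(set(x))[1:]:
        left = [c for a, c in zip(x, y) if a < s]
        right = [c for a, c in zip(x, y) if a >= s]
        sse = (sum((c - statistics.fmean(left)) ** 2 for c in left) +
               sum((c - statistics.fmean(right)) ** 2 for c in right))
        if best is None or sse < best[0]:
            best = (sse, s, statistics.fmean(left), statistics.fmean(right))
    _, s, ml, mr = best
    return lambda v: ml if v < s else mr

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1.1, 1.9, 3.2, 3.8, 8.1, 9.0, 9.9, 11.2]   # regime change at x = 5

lin, tree = fit_linear(x, y), fit_stump(x, y)

def blend(v):
    # Simple coupled model: average the two predictors
    return 0.5 * (lin(v) + tree(v))

print(round(blend(6), 2))
```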

  18. A kinetic approach to some quasi-linear laws of macroeconomics

    NASA Astrophysics Data System (ADS)

    Gligor, M.; Ignat, M.

    2002-11-01

    Some previous works have presented the data on wealth and income distributions in developed countries and have found that the great majority of the population is described by an exponential distribution, which results in the idea that a kinetic approach could be adequate to describe this empirical evidence. The aim of our paper is to extend this framework by developing a systematic kinetic approach to socio-economic systems and to explain how linear laws, modelling correlations between macroeconomic variables, may arise in this context. Firstly we construct the Boltzmann kinetic equation for an idealised system composed of many individuals (workers, officers, business men, etc.), each of them getting a certain income and spending money for their needs. To each individual a certain time-variable amount of money is associated, this being his/her phase-space coordinate. In this way the exponential distribution of money in a closed economy is explicitly found. The extension of this result, including states near equilibrium, gives us the possibility to take into account the regular increase of the total amount of money, according to modern economic theories. The Kubo-Green-Onsager linear response theory leads us to a set of linear equations between some macroeconomic variables. Finally, the validity of such laws is discussed in relation to time reversal symmetry and is tested empirically using some macroeconomic time series.
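
    The closed-economy exponential equilibrium that the authors derive kinetically can be reproduced with a toy pairwise-exchange simulation; the exchange rule and parameters below are illustrative, not the paper's formalism.

```python
# Toy money-exchange model: random pairs pool and re-split their
# money, conserving the total. The stationary distribution is
# approximately exponential (Boltzmann-Gibbs).
import random

random.seed(42)
N, steps = 1000, 200_000
money = [1.0] * N                  # closed economy, mean fixed at 1

for _ in range(steps):
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    pot = money[i] + money[j]      # the pair pools its money...
    eps = random.random()
    money[i], money[j] = eps * pot, (1 - eps) * pot   # ...and re-splits

mean = sum(money) / N
median = sorted(money)[N // 2]
# For an exponential distribution, median = mean * ln 2 ~ 0.693 * mean
print(f"mean {mean:.3f}, median {median:.3f}")
```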

  19. Frequency of Behavior Witnessed and Conformity in an Everyday Social Context

    PubMed Central

    Claidière, Nicolas; Bowler, Mark; Brookes, Sarah; Brown, Rebecca; Whiten, Andrew

    2014-01-01

    Conformity is thought to be an important force in human evolution because it has the potential to stabilize cultural homogeneity within groups and cultural diversity between groups. However, the effects of such conformity on cultural and biological evolution will depend much on the particular way in which individuals are influenced by the frequency of alternative behavioral options they witness. In a previous study we found that in a natural situation people displayed a tendency to be ‘linear-conformist’. When visitors to a Zoo exhibit were invited to write or draw answers to questions on cards to win a small prize and we manipulated the proportion of text versus drawings on display, we found a strong and significant effect of the proportion of text displayed on the proportion of text in the answers, a conformist effect that was largely linear with a small non-linear component. However, although this overall effect is important to understand cultural evolution, it might mask a greater diversity of behavioral responses shaped by variables such as age, sex, social environment and attention of the participants. Accordingly we performed a further study explicitly to analyze the effects of these variables, together with the quality of the information participants' responses made available to further visitors. Results again showed a largely linear conformity effect that varied little with the variables analyzed. PMID:24950212

  20. Static Analysis Numerical Algorithms

    DTIC Science & Technology

    2016-04-01

    represented by a collection of intervals (one for each variable) or a convex polyhedron (each dimension of the affine space representing a program variable... Another common abstract domain uses a set of linear constraints (i.e. an enclosing polyhedron) to over-approximate the joint values of several

  1. The employment of Support Vector Machine to classify high and low performance archers based on bio-physiological variables

    NASA Astrophysics Data System (ADS)

    Taha, Zahari; Muazu Musa, Rabiu; Majeed, Anwar P. P. Abdul; Razali Abdullah, Mohamad; Amirul Abdullah, Muhammad; Hasnun Arif Hassan, Mohd; Khalil, Zubair

    2018-04-01

    The present study employs a machine learning algorithm, namely the support vector machine (SVM), to classify high and low potential archers from a collection of bio-physiological variables trained on different SVMs. 50 youth archers with an average age and standard deviation of 17.0 ± .056, gathered from various archery programmes, completed a one end shooting score test. The bio-physiological variables, namely resting heart rate, resting respiratory rate, resting diastolic blood pressure, resting systolic blood pressure, as well as calorie intake, were measured prior to their shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed. SVM models with linear, quadratic and cubic kernel functions were trained on the aforementioned variables. The k-means clustered the archers into high (HPA) and low potential archers (LPA), respectively. It was demonstrated that the linear SVM exhibited good accuracy, with a classification accuracy of 94%, in comparison with the other tested models. The findings of this investigation can be valuable to coaches and sports managers in recognising high potential athletes from the selected bio-physiological variables examined.
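
    The clustering step of this pipeline, a two-cluster k-means on shooting scores, can be sketched in plain Python (the scores are invented for illustration):

```python
# Two-cluster 1-D k-means: split archers into low/high groups by
# shooting score, returning the two cluster centroids.
def kmeans_two(scores, iters=20):
    lo, hi = min(scores), max(scores)           # initial centroids
    for _ in range(iters):
        groups = ([s for s in scores if abs(s - lo) <= abs(s - hi)],
                  [s for s in scores if abs(s - lo) > abs(s - hi)])
        lo, hi = (sum(g) / len(g) for g in groups)
    return lo, hi   # centroids of low- and high-potential clusters

scores = [4, 5, 5, 6, 6, 7, 24, 25, 26, 27, 28]
print(kmeans_two(scores))
```

    The cluster labels produced this way would then serve as the targets for the SVM classifiers.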

  2. Multiresponse semiparametric regression for modelling the effect of regional socio-economic variables on the use of information technology

    NASA Astrophysics Data System (ADS)

    Wibowo, Wahyu; Wene, Chatrien; Budiantara, I. Nyoman; Permatasari, Erma Oktania

    2017-03-01

    Multiresponse semiparametric regression is a simultaneous-equation regression model and a fusion of parametric and nonparametric models. The regression model comprises several models, and each model has two components, parametric and nonparametric. The model used has a linear function as the parametric component and a truncated polynomial spline as the nonparametric component. The model can handle both linearity and nonlinearity in the relationship between the response and the sets of predictor variables. The aim of this paper is to demonstrate the application of the regression model to modelling the effect of regional socio-economic variables on the use of information technology. More specifically, the response variables are the percentage of households with access to the internet and the percentage of households with a personal computer, while the predictor variables are the percentage of literate people, the percentage of electrification and the percentage of economic growth. Based on identification of the relationship between response and predictor variables, economic growth is treated as a nonparametric predictor and the others as parametric predictors. The result shows that multiresponse semiparametric regression can be applied well, as indicated by the high coefficient of determination, 90 percent.

  3. Controls on the variability of net infiltration to desert sandstone

    USGS Publications Warehouse

    Heilweil, Victor M.; McKinney, Tim S.; Zhdanov, Michael S.; Watt, Dennis E.

    2007-01-01

    As populations grow in arid climates and desert bedrock aquifers are increasingly targeted for future development, understanding and quantifying the spatial variability of net infiltration becomes critically important for accurately inventorying water resources and mapping contamination vulnerability. This paper presents a conceptual model of net infiltration to desert sandstone and then develops an empirical equation for its spatial quantification at the watershed scale using linear least squares inversion methods for evaluating controlling parameters (independent variables) based on estimated net infiltration rates (dependent variables). Net infiltration rates used for this regression analysis were calculated from environmental tracers in boreholes and more than 3000 linear meters of vadose zone excavations in an upland basin in southwestern Utah underlain by Navajo sandstone. Soil coarseness, distance to upgradient outcrop, and topographic slope were shown to be the primary physical parameters controlling the spatial variability of net infiltration. Although the method should be transferable to other desert sandstone settings for determining the relative spatial distribution of net infiltration, further study is needed to evaluate the effects of other potential parameters such as slope aspect, outcrop parameters, and climate on absolute net infiltration rates.
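
    The linear least-squares inversion step can be sketched with a toy system linking the three controlling parameters identified above to net-infiltration estimates; all numbers are invented for illustration.

```python
# Solve A w = r in the least-squares sense: A holds site properties,
# r holds tracer-derived net-infiltration estimates, and w gives the
# fitted weight of each controlling parameter.
import numpy as np

# Columns: soil coarseness, distance to outcrop (m), slope (degrees)
A = np.array([[0.8, 120.0, 5.0],
              [0.6, 40.0, 12.0],
              [0.9, 200.0, 3.0],
              [0.4, 10.0, 20.0],
              [0.7, 90.0, 8.0]])
r = np.array([22.0, 14.0, 30.0, 5.0, 18.0])   # net infiltration, mm/yr

w, res, rank, sv = np.linalg.lstsq(A, r, rcond=None)
print(np.round(A @ w, 1))   # fitted rates vs. the estimates above
```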

  4. Data analytics using canonical correlation analysis and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles

    2017-07-01

    A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively few combinations of variables that are maximally correlated. One shortcoming of the canonical correlation analysis, however, is that it provides only a linear combination of variables that maximizes these correlations. With this in mind, we describe here a versatile, Monte-Carlo based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural descriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.

  5. Firmness prediction in Prunus persica 'Calrico' peaches by visible/short-wave near infrared spectroscopy and acoustic measurements using optimised linear and non-linear chemometric models.

    PubMed

    Lafuente, Victoria; Herrera, Luis J; Pérez, María del Mar; Val, Jesús; Negueruela, Ignacio

    2015-08-15

    In this work, near infrared spectroscopy (NIR) and an acoustic measure (AWETA) (two non-destructive methods) were applied in Prunus persica fruit 'Calrico' (n = 260) to predict Magness-Taylor (MT) firmness. Separate and combined use of these measures was evaluated and compared using partial least squares (PLS) and least squares support vector machine (LS-SVM) regression methods. Also, a mutual-information-based variable selection method, seeking to find the most significant variables to produce optimal accuracy of the regression models, was applied to a joint set of variables (NIR wavelengths and AWETA measure). The newly proposed combined NIR-AWETA model gave good values of the determination coefficient (R(2)) for PLS and LS-SVM methods (0.77 and 0.78, respectively), improving the reliability of MT firmness prediction in comparison with separate NIR and AWETA predictions. The three variables selected by the variable selection method (AWETA measure plus NIR wavelengths 675 and 697 nm) achieved R(2) values 0.76 and 0.77, PLS and LS-SVM. These results indicated that the proposed mutual-information-based variable selection algorithm was a powerful tool for the selection of the most relevant variables. © 2014 Society of Chemical Industry.

  6. Robust best linear estimator for Cox regression with instrumental variables in whole cohort and surrogates with additive measurement error in calibration sample.

    PubMed

    Wang, Ching-Yun; Song, Xiao

    2016-11-01

    Biomedical researchers are often interested in estimating the effect of an environmental exposure in relation to a chronic disease endpoint. However, the exposure variable of interest may be measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies an additive measurement error model, but it may not have repeated measurements. The subset in which the surrogate variables are available is called a calibration sample. In addition to the surrogate variables that are available among the subjects in the calibration sample, we consider the situation when there is an instrumental variable available for all study subjects. An instrumental variable is correlated with the unobserved true exposure variable, and hence can be useful in the estimation of the regression coefficients. In this paper, we propose a nonparametric method for Cox regression using the observed data from the whole cohort. The nonparametric estimator is the best linear combination of a nonparametric correction estimator from the calibration sample and the difference of the naive estimators from the calibration sample and the whole cohort. The asymptotic distribution is derived, and the finite sample performance of the proposed estimator is examined via intensive simulation studies. The methods are applied to the Nutritional Biomarkers Study of the Women's Health Initiative. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Linear models for airborne-laser-scanning-based operational forest inventory with small field sample size and highly correlated LiDAR data

    USGS Publications Warehouse

    Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.

    2015-01-01

    Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
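
    The SVD-plus-regularization combination can be sketched as ridge regression computed through the singular value decomposition; the synthetic data below mimic the multicollinearity problem the abstract describes, and are not from the study.

```python
# Ridge-regularised linear model via the SVD of the predictor matrix.
import numpy as np

def ridge_via_svd(X, y, lam=1.0):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Shrink each singular direction by s / (s^2 + lam): collinear
    # (small-s) directions are damped, taming multicollinearity.
    d = s / (s ** 2 + lam)
    return Vt.T @ (d * (U.T @ y))

rng = np.random.default_rng(0)
n = 60
x1 = rng.normal(size=n)
x2 = x1 + 1e-3 * rng.normal(size=n)   # nearly collinear predictor
X = np.column_stack([x1, x2])
y = x1 + 0.1 * rng.normal(size=n)

beta = ridge_via_svd(X, y, lam=0.1)
print(np.round(beta, 2))              # weight is split across the pair
```

    Ordinary least squares would produce huge, unstable coefficients on this near-collinear pair; damping the small singular direction spreads the weight evenly instead.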

  8. How Do Microphysical Processes Influence Large-Scale Precipitation Variability and Extremes?

    DOE PAGES

    Hagos, Samson; Ruby Leung, L.; Zhao, Chun; ...

    2018-02-10

    Convection permitting simulations using the Model for Prediction Across Scales-Atmosphere (MPAS-A) are used to examine how microphysical processes affect large-scale precipitation variability and extremes. An episode of the Madden-Julian Oscillation is simulated using MPAS-A with a refined region at 4-km grid spacing over the Indian Ocean. It is shown that cloud microphysical processes regulate the precipitable water (PW) statistics. Because of the non-linear relationship between precipitation and PW, PW exceeding a certain critical value (PWcr) contributes disproportionately to precipitation variability. However, the frequency of PW exceeding PWcr decreases rapidly with PW, so changes in microphysical processes that shift the column PW statistics relative to PWcr even slightly have large impacts on precipitation variability. Furthermore, precipitation variance and extreme precipitation frequency are approximately linearly related to the difference between the mean and critical PW values. Thus observed precipitation statistics could be used to directly constrain model microphysical parameters as this study demonstrates using radar observations from the DYNAMO field campaign.

  9. Application of Statistic Experimental Design to Assess the Effect of Gammairradiation Pre-Treatment on the Drying Characteristics and Qualities of Wheat

    NASA Astrophysics Data System (ADS)

    Yu, Yong; Wang, Jun

    Wheat, pretreated by 60Co gamma irradiation, was dried by hot air with an irradiation dosage of 0-3 kGy, drying temperatures of 40-60 °C, and initial moisture contents of 19-25% (dry basis). The drying characteristics and dried qualities of the wheat were evaluated based on drying time, average dehydration rate, wet gluten content (WGC), moisture content of wet gluten (MCWG) and titratable acidity (TA). A quadratic rotation-orthogonal composite experimental design, with three variables (at five levels) and five response functions, and an analysis method were employed to study the effect of the three variables on the individual response functions. The five response functions (drying time, average dehydration rate, WGC, MCWG, TA) were correlated with these variables by second order polynomials consisting of linear, quadratic and interaction terms. A high correlation coefficient indicated the suitability of the second order polynomial to predict these response functions. The linear, interaction and quadratic effects of the three variables on the five response functions were all studied.

  10. How Do Microphysical Processes Influence Large-Scale Precipitation Variability and Extremes?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagos, Samson; Ruby Leung, L.; Zhao, Chun

    Convection permitting simulations using the Model for Prediction Across Scales-Atmosphere (MPAS-A) are used to examine how microphysical processes affect large-scale precipitation variability and extremes. An episode of the Madden-Julian Oscillation is simulated using MPAS-A with a refined region at 4-km grid spacing over the Indian Ocean. It is shown that cloud microphysical processes regulate the precipitable water (PW) statistics. Because of the non-linear relationship between precipitation and PW, PW exceeding a certain critical value (PWcr) contributes disproportionately to precipitation variability. However, the frequency of PW exceeding PWcr decreases rapidly with PW, so changes in microphysical processes that shift the column PW statistics relative to PWcr even slightly have large impacts on precipitation variability. Furthermore, precipitation variance and extreme precipitation frequency are approximately linearly related to the difference between the mean and critical PW values. Thus observed precipitation statistics could be used to directly constrain model microphysical parameters, as this study demonstrates using radar observations from the DYNAMO field campaign.

  11. Assessing environmental inequalities in ambient air pollution across urban Australia.

    PubMed

    Knibbs, Luke D; Barnett, Adrian G

    2015-04-01

    Identifying inequalities in air pollution levels across population groups can help address environmental justice concerns. We were interested in assessing these inequalities across major urban areas in Australia. We used a land-use regression model to predict ambient nitrogen dioxide (NO2) levels and sought the best socio-economic and population predictor variables. We used a generalised least squares model that accounted for spatial correlation in NO2 levels to examine the associations between the variables. We found that the best model included the index of economic resources (IER) score as a non-linear variable and the percentage of non-Indigenous persons as a linear variable. NO2 levels decreased with increasing IER scores (higher scores indicate less disadvantage) in almost all major urban areas, and NO2 also decreased slightly as the percentage of non-Indigenous persons increased. However, the magnitude of differences in NO2 levels was small and may not translate into substantive differences in health. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Dynamic Coupling Between Respiratory and Cardiovascular System

    NASA Astrophysics Data System (ADS)

    Censi, Federica; Calcagnini, Giovanni; Cerutti, Sergio

    The analysis of the non-linear dynamics of coupling among interacting quantities can be very useful for understanding cardiorespiratory and cardiovascular control mechanisms. In this chapter, recurrence plots (RP) are used to detect and quantify the degree of non-linear coupling between respiration and the spontaneous rhythms of both heart rate and blood pressure variability signals. Recurrence quantification analysis (RQA) turned out to be suitable for a quantitative evaluation of the observed coupling patterns among rhythms, in both simulated and real data, providing different degrees of coupling. The results from the simulated data showed that an increased degree of coupling between the signals was marked by an increase of PR and PD, and by a decrease of ER. When RQA was applied to experimental data, PD and ER turned out to be the most significant variables, compared to PR. A remarkable finding is the detection of transient 1:2 PL episodes between respiration and cardiovascular variability signals. This phenomenon can be associated with a sub-harmonic synchronization between the two main rhythms of HR and BP variability series.

  13. Influence of age on the correlations of hematological and biochemical variables with the stability of erythrocyte membrane in relation to sodium dodecyl sulfate.

    PubMed

    de Freitas, Mariana V; Marquez-Bernardes, Liandra F; de Arvelos, Letícia R; Paraíso, Lara F; Gonçalves E Oliveira, Ana Flávia M; Mascarenhas Netto, Rita de C; Neto, Morun Bernardino; Garrote-Filho, Mario S; de Souza, Paulo César A; Penha-Silva, Nilson

    2014-10-01

    To evaluate the influence of age on the relationships between biochemical and hematological variables and the stability of the erythrocyte membrane in relation to sodium dodecyl sulfate (SDS) in a population of 105 female volunteers between 20 and 90 years of age. The stability of the RBC membrane was determined by non-linear regression of the dependence of the absorbance of released hemoglobin on the SDS concentration, represented by the half-transition point of the curve (D50) and the variation in the concentration of the detergent needed to promote lysis (dD). There was an age-dependent increase in membrane stability in relation to SDS. Analyses by multiple linear regression showed that this stability increase is significantly related to the hematological variable red cell distribution width (RDW) and the biochemical variables blood albumin and cholesterol. The positive association between erythrocyte stability and RDW may reflect one possible mechanism involved in the clinical meaning of this hematological index.
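The D50/dD parameterization amounts to fitting a sigmoidal lysis curve to absorbance-versus-detergent data. A minimal numpy-only sketch; the logistic functional form and all data below are illustrative assumptions, not the study's:

```python
import numpy as np

def fit_lysis_curve(sds, absorbance):
    """Grid-search fit of a sigmoidal lysis curve
    A(s) = a_max / (1 + exp((d50 - s) / dd)), where d50 is the
    half-transition point and dd the width of the transition.
    For fixed (d50, dd) the best a_max has a closed form."""
    best = (np.inf, None, None)
    for d50 in np.linspace(0.5, 3.5, 151):
        for dd in np.linspace(0.05, 1.0, 96):
            f = 1.0 / (1.0 + np.exp((d50 - sds) / dd))
            a_max = (f @ absorbance) / (f @ f)   # closed-form amplitude
            sse = np.sum((a_max * f - absorbance) ** 2)
            if sse < best[0]:
                best = (sse, d50, dd)
    return best[1], best[2]

# Synthetic release curve with a known transition at d50 = 2.0, dd = 0.3
sds = np.linspace(0.0, 4.0, 40)
rng = np.random.default_rng(1)
obs = 1.0 / (1.0 + np.exp((2.0 - sds) / 0.3)) + rng.normal(0, 0.01, sds.size)

d50, dd = fit_lysis_curve(sds, obs)
```

In practice a non-linear least-squares routine would replace the grid search; the grid keeps the sketch dependency-free.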

  14. Global spectral irradiance variability and material discrimination at Boulder, Colorado.

    PubMed

    Pan, Zhihong; Healey, Glenn; Slater, David

    2003-03-01

    We analyze 7,258 global spectral irradiance functions over 0.4-2.2 microm that were acquired over a wide range of conditions at Boulder, Colorado, during the summer of 1997. We show that low-dimensional linear models can be used to capture the variability in these spectra over both the visible and the 0.4-2.2 microm spectral ranges. Using a linear model, we compare the Boulder data with the previous study of Judd et al. [J. Opt. Soc. Am. 54, 1031 (1964)] over the visible wavelengths. We also examine the agreement of the Boulder data with a spectral database generated by using the MODTRAN 4.0 radiative transfer code. We use a database of 223 minerals to consider the effect of the spectral variability in the global spectral irradiance functions on hyperspectral material identification. We show that the 223 minerals can be discriminated accurately over the variability in the Boulder data with subspace projection techniques.

  15. Rapid timing studies of black hole binaries in Optical and X-rays: correlated and non-linear variability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gandhi, P.; Dhillon, V. S.; Durant, M.

    2010-07-15

    In a fast multi-wavelength timing study of black hole X-ray binaries (BHBs), we have discovered correlated optical and X-ray variability in the low/hard state of two sources: GX 339-4 and SWIFT J1753.5-0127. After XTE J1118+480, these are the only BHBs currently known to show rapid (sub-second) aperiodic optical flickering. Our simultaneous VLT/Ultracam and RXTE data reveal intriguing patterns with characteristic peaks, dips and lags down to very short timescales. Simple linear reprocessing models can be ruled out as the origin of the rapid, aperiodic optical power in both sources. A magnetic energy release model with fast interactions between the disk, jet and corona can explain the complex correlation patterns. We also show that in both the optical and X-ray light curves, the absolute source variability r.m.s. amplitude linearly increases with flux, and that the flares have a log-normal distribution. The implication is that variability at both wavelengths is not due to local fluctuations alone, but rather arises as a result of coupling of perturbations over a wide range of radii and timescales. These 'optical and X-ray rms-flux relations' thus provide new constraints to connect the outer and inner parts of the accretion flow, and the jet.

  16. Application of recurrence quantification analysis to automatically estimate infant sleep states using a single channel of respiratory data.

    PubMed

    Terrill, Philip I; Wilson, Stephen J; Suresh, Sadasivam; Cooper, David M; Dakin, Carolyn

    2012-08-01

    Previous work has identified that non-linear variables calculated from respiratory data vary between sleep states, and that variables derived from the non-linear analytical tool recurrence quantification analysis (RQA) are accurate infant sleep state discriminators. This study aims to apply these discriminators to automatically classify 30 s epochs of infant sleep as REM, non-REM and wake. Polysomnograms were obtained from 25 healthy infants at 2 weeks, 3, 6 and 12 months of age, and manually sleep staged as wake, REM and non-REM. Inter-breath interval data were extracted from the respiratory inductive plethysmograph, and RQA was applied to calculate radius, determinism and laminarity. Time-series statistics and spectral analysis variables were also calculated. A nested cross-validation method was used to identify the optimal feature subset, and to train and evaluate a linear discriminant analysis-based classifier. The RQA features radius and laminarity were reliably selected. Mean agreement was 79.7, 84.9, 84.0 and 79.2 % at 2 weeks, 3, 6 and 12 months respectively, and the classifier performed better than a comparison classifier not including RQA variables. The performance of this sleep-staging tool compares favourably with inter-human agreement rates, and improves upon previous systems using only respiratory data. Applications include diagnostic screening and population-based sleep research.

  17. Controlling Continuous-Variable Quantum Key Distribution with Entanglement in the Middle Using Tunable Linear Optics Cloning Machines

    NASA Astrophysics Data System (ADS)

    Wu, Xiao Dong; Chen, Feng; Wu, Xiang Hua; Guo, Ying

    2017-02-01

    Continuous-variable quantum key distribution (CVQKD) can provide higher detection efficiency than discrete-variable quantum key distribution (DVQKD). In this paper, we demonstrate a controllable CVQKD with the entangled source in the middle, in contrast to traditional point-to-point CVQKD, where the entanglement source is usually created by one honest party and the Gaussian noise added on the reference partner of the reconciliation is uncontrollable. In order to harmonize the additive noise that originates in the middle and resist the effect of a malicious eavesdropper, we propose a controllable CVQKD protocol that performs a tunable linear optics cloning machine (LOCM) at one participant's side, say Alice's. Simulation results show that we can achieve the optimal secret key rates by selecting the parameters of the tuned LOCM in the derived regions.

  18. [Optimal extraction of effective constituents from Aralia elata by central composite design and response surface methodology].

    PubMed

    Lv, Shao-Wa; Liu, Dong; Hu, Pan-Pan; Ye, Xu-Yan; Xiao, Hong-Bin; Kuang, Hai-Xue

    2010-03-01

    To optimize the process of extracting effective constituents from Aralia elata by response surface methodology. The independent variables were ethanol concentration, reflux time and solvent fold; the dependent variable was the extraction rate of total saponins in Aralia elata. Linear and non-linear mathematical models were used to estimate the relationship between the independent and dependent variables. Response surface methodology was used to optimize the extraction process. The prediction was evaluated by comparing observed and predicted values. The regression coefficient of the fitted second-order polynomial model was as high as 0.9617; the optimum extraction conditions were 70% ethanol, 2.5 hours of reflux, 20-fold solvent and 3 extractions. The bias between observed and predicted values was -2.41%. This shows the optimized model is highly predictive.
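The response-surface step, locating the optimum of a fitted second-order model, can be sketched as a least-squares fit followed by a grid search over the factor ranges. The factor ranges and the response below are synthetic illustrations, not the study's data:

```python
import numpy as np
from itertools import product

def fit_quadratic(X, y):
    """Least-squares fit of a full second-order model in three factors;
    returns a predictor for new factor settings."""
    x1, x2, x3 = X.T
    A = np.column_stack([np.ones(len(y)), x1, x2, x3,
                         x1 ** 2, x2 ** 2, x3 ** 2,
                         x1 * x2, x1 * x3, x2 * x3])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict(p):
        a, b, c = p
        terms = np.array([1.0, a, b, c, a * a, b * b, c * c,
                          a * b, a * c, b * c])
        return coef @ terms
    return predict

# Synthetic response with an interior optimum at (70, 2.5, 20)
rng = np.random.default_rng(8)
X = rng.uniform([50.0, 1.0, 5.0], [90.0, 4.0, 35.0], size=(80, 3))
y = -((X[:, 0] - 70) / 10) ** 2 - (X[:, 1] - 2.5) ** 2 - ((X[:, 2] - 20) / 5) ** 2

predict = fit_quadratic(X, y)
grid = product(np.linspace(50, 90, 41), np.linspace(1, 4, 31),
               np.linspace(5, 35, 31))
best = max(grid, key=predict)      # optimum factor settings on the grid
```

In a real RSM analysis the stationary point would typically be found analytically from the fitted coefficients; the grid search keeps the sketch short.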

  19. Solution of the finite Milne problem in stochastic media with RVT Technique

    NASA Astrophysics Data System (ADS)

    Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.

    2017-12-01

    This paper presents the solution to the Milne problem in the steady state with an isotropic scattering phase function. The properties of the medium are considered stochastic, with Gaussian or exponential distributions, and hence the problem is treated as a stochastic integro-differential equation. To obtain explicit forms for the radiant energy density, linear extrapolation distance, reflectivity and transmissivity in the deterministic case, the problem is solved using the Pomraning-Eddington method. The obtained solution is found to depend on the optical space variable and the thickness of the medium, which are considered random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process. From this, the stochastic linear extrapolation distance, reflectivity and transmissivity are calculated. Numerical results and conclusions are provided for illustration.

  20. Multivariate calibration on NIR data: development of a model for the rapid evaluation of ethanol content in bakery products.

    PubMed

    Bello, Alessandra; Bianchi, Federica; Careri, Maria; Giannetto, Marco; Mori, Giovanni; Musci, Marilena

    2007-11-05

    A new NIR method based on multivariate calibration for the determination of ethanol in industrially packed wholemeal bread was developed and validated. GC-FID was used as the reference method for determining the actual ethanol concentration of wholemeal bread samples with known amounts of added ethanol, ranging from 0 to 3.5% (w/w). Stepwise discriminant analysis was carried out on the NIR dataset in order to reduce the number of original variables by selecting those able to discriminate between samples of different ethanol concentrations. With the selected variables, a multivariate calibration model was then obtained by multiple linear regression. The prediction power of the linear model was optimized by a new "leave one out" method, further reducing the number of original variables.
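A leave-one-out criterion for trimming variables in a multiple linear regression can be sketched with the closed-form LOO residuals. The forward-selection wrapper and the synthetic "spectral" data below are illustrative stand-ins, not the paper's procedure:

```python
import numpy as np

def loo_press(X, y):
    """Leave-one-out prediction error (PRESS) of an OLS fit, using the
    closed-form shortcut e_i / (1 - h_ii) from the hat matrix."""
    A = np.column_stack([np.ones(len(y)), X])
    H = A @ np.linalg.pinv(A)          # hat (projection) matrix
    resid = y - H @ y
    return np.sum((resid / (1.0 - np.diag(H))) ** 2)

def forward_select(X, y, k):
    """Greedy forward selection of k predictors minimizing LOO PRESS."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = min(remaining, key=lambda j: loo_press(X[:, chosen + [j]], y))
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 10))          # 10 candidate spectral variables
y = 2.0 * X[:, 3] - 1.0 * X[:, 7] + rng.normal(0, 0.1, 60)
selected = forward_select(X, y, 2)     # expected to recover columns 3 and 7
```

The e_i / (1 - h_ii) identity makes each leave-one-out evaluation come from a single fit, which is what makes this kind of variable pruning cheap.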

  1. A computing method for sound propagation through a nonuniform jet stream

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Liu, C. H.

    1974-01-01

    The classical formulation of sound propagation through a jet flow was found to be inadequate for computer solutions. Previous investigations selected the phase and amplitude of the acoustic pressure as dependent variables, requiring the solution of a system of nonlinear algebraic equations. The nonlinearities complicated both the analysis and the computation. A reformulation of the convective wave equation in terms of a new set of dependent variables is developed, with special emphasis on its suitability for numerical solution on fast computers. The technique is attractive because the resulting equations are linear in the new variables. The computer solution to such a linear system of algebraic equations may be obtained by well-defined and direct means that are conservative of computer time and storage space. Typical examples are illustrated and computational results are compared with available numerical and experimental data.

  2. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

    NASA Astrophysics Data System (ADS)

    Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

    2017-12-01

    An order selection method based on multivariate stepwise regression is proposed for the General Expression of the Nonlinear Autoregressive (GNAR) model, which converts the model order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to assess the improvement that each newly introduced or originally existing variable brings to the model, and these are used to determine which model variables to retain or eliminate. The optimal model is thus obtained through measurement of the data-fitting quality or significance testing. Simulations and experiments on classic time-series data show that the proposed method is simple, reliable and applicable to practical engineering.

  3. FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.

    PubMed

    Li, Pu; Chen, Bing

    2011-04-01

    Although many studies on municipal solid waste (MSW) management have been conducted under the coexisting uncertainties of fuzzy, stochastic and interval information, conventional linear programming approaches that integrate the fuzzy method with the other two have been inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming to support municipal solid waste management. Nguyen's method was used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as the superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method improves upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, achieving its capabilities with fewer constraints and significantly reduced computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions had been generated. The solution can help quantify the relationship between changes in system cost and the uncertainties, which could support further analysis of the tradeoffs between waste management cost and system failure risk. Copyright © 2010 Elsevier Ltd. All rights reserved.

  4. Sensitivity to mental effort and test-retest reliability of heart rate variability measures in healthy seniors.

    PubMed

    Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P; Oken, Barry S

    2011-10-01

    To determine (1) whether heart rate variability (HRV) is a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and (2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches, are useful for studying such effects. Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort while ECG was recorded. They underwent the same tasks and recordings 2 weeks later. Traditional indices and 13 non-linear indices of HRV, including Poincaré, entropy and detrended fluctuation analysis (DFA), were determined. Time domain indices, especially mean R-R interval (RRI), frequency domain indices and, among the non-linear parameters, Poincaré and DFA were the most reliable. Mean RRI, time domain indices and Poincaré were also the most sensitive to different mental effort task loads and had the largest effect sizes. Overall, linear measures were the most sensitive and reliable indices of mental effort. Among non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. A large number of HRV parameters were both reliable and sensitive indices of mental effort, although the simple linear methods were the most sensitive. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
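The Poincaré descriptors used in this record have simple closed forms: SD1 and SD2 are the standard deviations of the Poincaré cloud perpendicular to and along the line of identity. A minimal sketch on a synthetic R-R series:

```python
import numpy as np

def poincare_sd(rr):
    """Poincaré plot descriptors of an R-R interval series.
    SD1 captures short-term (beat-to-beat) variability; SD2 captures
    longer-term variability along the line of identity."""
    x, y = rr[:-1], rr[1:]
    sd1 = np.std((y - x) / np.sqrt(2), ddof=1)
    sd2 = np.std((y + x) / np.sqrt(2), ddof=1)
    return sd1, sd2

# Illustrative RR series (ms): slow drift plus beat-to-beat noise
rng = np.random.default_rng(4)
rr = 800 + np.cumsum(rng.normal(0, 5, 300))
sd1, sd2 = poincare_sd(rr)
```

A series dominated by slow drift, as here, yields SD2 much larger than SD1; a change in task load would show up as a shift in these descriptors.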

  5. Semiparametric methods for estimation of a nonlinear exposure-outcome relationship using instrumental variables with application to Mendelian randomization.

    PubMed

    Staley, James R; Burgess, Stephen

    2017-05-01

    Mendelian randomization, the use of genetic variants as instrumental variables (IV), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure-outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure-outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of population. The fractional polynomial method performs metaregression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure-outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of relationship of body mass index with systolic blood pressure and diastolic blood pressure. © 2017 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
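The stratified, piecewise linear construction described above can be sketched with ordinary regression standing in for the instrumental-variable step; the per-stratum slopes below are plain OLS slopes on synthetic data, not genuine LACE estimates:

```python
import numpy as np

def stratum_slopes(x, y, n_strata=5):
    """Per-stratum OLS slopes over quantile strata of the exposure
    (a stand-in for the stratified LACE estimates in the record)."""
    edges = np.quantile(x, np.linspace(0, 1, n_strata + 1))
    slopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (x >= lo) & (x <= hi)
        slopes.append(np.polyfit(x[m], y[m], 1)[0])
    return edges, np.array(slopes)

def piecewise_value(edges, slopes, x0, y0=0.0):
    """Continuous piecewise linear curve whose gradient in each stratum
    equals that stratum's slope estimate, evaluated at x0."""
    xs, ys = [edges[0]], [y0]
    for lo, hi, s in zip(edges[:-1], edges[1:], slopes):
        xs.append(hi)
        ys.append(ys[-1] + s * (hi - lo))
    return np.interp(x0, xs, ys)

# Synthetic threshold-shaped exposure-outcome relationship (e.g. BMI)
rng = np.random.default_rng(5)
x = rng.uniform(15, 40, 2000)
y = np.where(x < 25, 0.0, 2.0 * (x - 25)) + rng.normal(0, 0.5, 2000)

edges, slopes = stratum_slopes(x, y)
curve_end = piecewise_value(edges, slopes, edges[-1])
```

Stitching the stratum gradients into one continuous curve is what lets a non-linear exposure-outcome shape emerge from purely local linear estimates.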

  6. Semiparametric methods for estimation of a nonlinear exposure‐outcome relationship using instrumental variables with application to Mendelian randomization

    PubMed Central

    Staley, James R.

    2017-01-01

    ABSTRACT Mendelian randomization, the use of genetic variants as instrumental variables (IV), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure‐outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure‐outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of population. The fractional polynomial method performs metaregression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure‐outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of relationship of body mass index with systolic blood pressure and diastolic blood pressure. PMID:28317167

  7. Growth and yield in Eucalyptus globulus

    Treesearch

    James A. Rinehart; Richard B. Standiford

    1983-01-01

    A study of the major Eucalyptus globulus stands throughout California conducted by Woodbridge Metcalf in 1924 provides a complete and accurate data set for generating variable site-density yield models. Two models were developed using linear regression techniques. Model I depicts a linear relationship between age and yield best used for stands between five and fifteen...

  8. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies

    ERIC Educational Resources Information Center

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-01-01

    Purpose: Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. Method: We propose a…

  9. Revisiting the Scale-Invariant, Two-Dimensional Linear Regression Method

    ERIC Educational Resources Information Center

    Patzer, A. Beate C.; Bauer, Hans; Chang, Christian; Bolte, Jan; Sülzle, Detlev

    2018-01-01

    The scale-invariant way to analyze two-dimensional experimental and theoretical data with statistical errors in both the independent and dependent variables is revisited by using what we call the triangular linear regression method. This is compared to the standard least-squares fit approach by applying it to typical simple sets of example data…

  10. An Authentic Task That Models Quadratics

    ERIC Educational Resources Information Center

    Baron, Lorraine M.

    2015-01-01

    As students develop algebraic reasoning in grades 5 to 9, they learn to recognize patterns and understand expressions, equations, and variables. Linear functions are a focus in eighth-grade mathematics, and by algebra 1, students must make sense of functions that are not linear. This article describes how students worked through a classroom task…

  11. On the null distribution of Bayes factors in linear regression

    USDA-ARS?s Scientific Manuscript database

    We show that under the null, the 2 log (Bayes factor) is asymptotically distributed as a weighted sum of chi-squared random variables with a shifted mean. This claim holds for Bayesian multi-linear regression with a family of conjugate priors, namely, the normal-inverse-gamma prior, the g-prior, and...
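The claimed asymptotic null distribution is easy to simulate; a sketch drawing from a shifted weighted sum of chi-squared(1) variables, with purely illustrative weights and shift:

```python
import numpy as np

def weighted_chi2_sample(weights, shift, size, rng):
    """Draws from shift + sum_k w_k * chi2_1, the form of the
    asymptotic null distribution described in the record."""
    z = rng.normal(size=(size, len(weights)))
    return shift + (z ** 2) @ np.asarray(weights)

rng = np.random.default_rng(6)
draws = weighted_chi2_sample([0.5, 0.3, 0.2], shift=-1.0, size=100_000, rng=rng)
# E[chi2_1] = 1, so the mean is shift + sum(weights);
# Var[chi2_1] = 2, so the variance is 2 * sum(w**2 for w in weights).
```

Simulated draws like these give null quantiles for calibrating 2 log(Bayes factor) when the weights have been estimated.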

  12. On the Relation between the Linear Factor Model and the Latent Profile Model

    ERIC Educational Resources Information Center

    Halpin, Peter F.; Dolan, Conor V.; Grasman, Raoul P. P. P.; De Boeck, Paul

    2011-01-01

    The relationship between linear factor models and latent profile models is addressed within the context of maximum likelihood estimation based on the joint distribution of the manifest variables. Although the two models are well known to imply equivalent covariance decompositions, in general they do not yield equivalent estimates of the…

  13. A Linear Variable-[theta] Model for Measuring Individual Differences in Response Precision

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2011-01-01

    Models for measuring individual response precision have been proposed for binary and graded responses. However, more continuous formats are quite common in personality measurement and are usually analyzed with the linear factor analysis model. This study extends the general Gaussian person-fluctuation model to the continuous-response case and…

  14. Robust linear discriminant analysis with distance based estimators

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina

    2017-11-01

    Linear discriminant analysis (LDA) is a supervised classification technique concerning the relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function that distinguishes between populations and allocates future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, LDA yields the optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA relies heavily on the sample mean and pooled sample covariance matrix, which are known to be sensitive to outliers. To alleviate these problems, a new robust LDA using distance-based estimators known as the minimum variance vector (MVV) is proposed in this study. The MVV estimators were used in place of the classical sample mean and sample covariance to form a robust linear discriminant rule (RLDR). A simulation and real-data study was conducted to examine the performance of the proposed RLDR, measured in terms of misclassification error rates. The computational results showed that the proposed RLDR is better than the classical LDR and comparable with an existing robust LDR.
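The classical linear discriminant rule that robust variants like the one above build on can be sketched in a few lines; the two-group Gaussian data below are synthetic:

```python
import numpy as np

def fit_ldr(X0, X1):
    """Classical two-group linear discriminant rule: allocate x to
    group 1 when w . (x - m) > 0, with w = S^{-1} (mu1 - mu0),
    m the midpoint of the group means, S the pooled sample covariance."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    n0, n1 = len(X0), len(X1)
    S = ((n0 - 1) * np.cov(X0.T) + (n1 - 1) * np.cov(X1.T)) / (n0 + n1 - 2)
    w = np.linalg.solve(S, mu1 - mu0)
    m = 0.5 * (mu0 + mu1)
    return lambda x: (x - m) @ w > 0

rng = np.random.default_rng(7)
X0 = rng.normal([0.0, 0.0], 1.0, size=(200, 2))
X1 = rng.normal([3.0, 3.0], 1.0, size=(200, 2))
rule = fit_ldr(X0, X1)
err = 0.5 * (rule(X0).mean() + (1 - rule(X1)).mean())  # misclassification rate
```

A robust variant such as the record's RLDR keeps this same rule but swaps the sample mean and pooled covariance for outlier-resistant estimators.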

  15. Local energy decay for linear wave equations with variable coefficients

    NASA Astrophysics Data System (ADS)

    Ikehata, Ryo

    2005-06-01

    A uniform local energy decay result is derived for the linear wave equation with spatially variable coefficients. We deal with this equation in an exterior domain with a star-shaped complement. Our advantage is that we do not assume any compactness of the support of the initial data, and the proof is quite simple. This generalizes a famous earlier result due to Morawetz [The decay of solutions of the exterior initial-boundary value problem for the wave equation, Comm. Pure Appl. Math. 14 (1961) 561-568]. In order to prove local energy decay, we mainly apply two types of ideas due to Ikehata-Matsuyama [L2-behaviour of solutions to the linear heat and wave equations in exterior domains, Sci. Math. Japon. 55 (2002) 33-42] and Todorova-Yordanov [Critical exponent for a nonlinear wave equation with damping, J. Differential Equations 174 (2001) 464-489].

  16. Light propagation in linearly perturbed ΛLTB models

    NASA Astrophysics Data System (ADS)

    Meyer, Sven; Bartelmann, Matthias

    2017-11-01

    We apply a generic formalism of light propagation to linearly perturbed spherically symmetric dust models including a cosmological constant. For a comoving observer on the central worldline, we derive the equation of geodesic deviation and perform a suitable spherical harmonic decomposition. This allows us to map the abstract gauge-invariant perturbation variables to well-known quantities from weak gravitational lensing such as convergence or cosmic shear. The resulting set of differential equations can effectively be solved by a Green's function approach, leading to line-of-sight integrals sourced by the perturbation variables on the backward lightcone. The resulting spherical harmonic coefficients of the lensing observables are presented, and the shear field is decomposed into its E- and B-modes. The results of this work are an essential tool for adding information from linear structure formation to the analysis of spherically symmetric dust models, with the purpose of testing the Copernican Principle with multiple cosmological probes.

  17. Analysis of a Linear System for Variable-Thrust Control in the Terminal Phase of Rendezvous

    NASA Technical Reports Server (NTRS)

    Hord, Richard A.; Durling, Barbara J.

    1961-01-01

    A linear system for applying thrust to a ferry vehicle in the terminal phase of rendezvous with a satellite is analyzed. This system requires that the ferry thrust vector per unit mass be variable and equal to a suitable linear combination of the measured position and velocity vectors of the ferry relative to the satellite. The variations of the ferry position, speed, acceleration, and mass ratio are examined for several combinations of the initial conditions and two basic control parameters analogous to the undamped natural frequency and the fraction of critical damping. Upon making a desirable selection of one control parameter and requiring minimum fuel expenditure for given terminal-phase initial conditions, a simplified analysis in one dimension practically fixes the choice of the remaining control parameter. The system can be implemented by an automatic controller or by a pilot.
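The control law can be illustrated with a one-dimensional simulation, a deliberately simplified, constant-coefficient version of the system analyzed in the record (gain values and initial conditions below are arbitrary):

```python
def simulate_terminal_phase(x0, v0, wn, zeta, dt=0.01, t_end=30.0):
    """One-dimensional sketch of the record's control law: commanded
    thrust per unit mass is a linear combination of the measured
    relative position and velocity, a = -wn**2 * x - 2*zeta*wn * v,
    where wn plays the role of the undamped natural frequency and
    zeta the fraction of critical damping."""
    x, v = x0, v0
    for _ in range(int(t_end / dt)):
        a = -wn ** 2 * x - 2.0 * zeta * wn * v
        v += a * dt          # semi-implicit Euler integration
        x += v * dt
    return x, v

# Ferry starting 1000 m from the satellite, closing at 20 m/s
x_end, v_end = simulate_terminal_phase(x0=1000.0, v0=-20.0, wn=0.5, zeta=0.7)
```

With zeta near critical damping, both relative position and velocity decay smoothly toward zero, which is the behavior the record's parameter-selection analysis tunes for.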

  18. Empirical and Theoretical Aspects of Generation and Transfer of Information in a Neuromagnetic Source Network

    PubMed Central

    Vakorin, Vasily A.; Mišić, Bratislav; Krakovska, Olga; McIntosh, Anthony Randal

    2011-01-01

    Variability in source dynamics across the sources in an activated network may be indicative of how the information is processed within a network. Information-theoretic tools allow one not only to characterize local brain dynamics but also to describe interactions between distributed brain activity. This study follows such a framework and explores the relations between signal variability and asymmetry in mutual interdependencies in a data-driven pipeline of non-linear analysis of neuromagnetic sources reconstructed from human magnetoencephalographic (MEG) data collected as a reaction to a face recognition task. Asymmetry in non-linear interdependencies in the network was analyzed using transfer entropy, which quantifies predictive information transfer between the sources. Variability of the source activity was estimated using multi-scale entropy, quantifying the rate of which information is generated. The empirical results are supported by an analysis of synthetic data based on the dynamics of coupled systems with time delay in coupling. We found that the amount of information transferred from one source to another was correlated with the difference in variability between the dynamics of these two sources, with the directionality of net information transfer depending on the time scale at which the sample entropy was computed. The results based on synthetic data suggest that both time delay and strength of coupling can contribute to the relations between variability of brain signals and information transfer between them. Our findings support the previous attempts to characterize functional organization of the activated brain, based on a combination of non-linear dynamics and temporal features of brain connectivity, such as time delay. PMID:22131968

  19. LCFIPlus: A framework for jet analysis in linear collider studies

    NASA Astrophysics Data System (ADS)

    Suehara, Taikan; Tanabe, Tomohiko

    2016-02-01

    We report on the progress in flavor identification tools developed for a future e+e- linear collider such as the International Linear Collider (ILC) and Compact Linear Collider (CLIC). Building on the work carried out by the LCFIVertex collaboration, we employ new strategies in vertex finding and jet finding, and introduce new discriminating variables for jet flavor identification. We present the performance of the new algorithms in the conditions simulated using a detector concept designed for the ILC. The algorithms have been successfully used in ILC physics simulation studies, such as those presented in the ILC Technical Design Report.

  20. Functional Relationships and Regression Analysis.

    ERIC Educational Resources Information Center

    Preece, Peter F. W.

    1978-01-01

    Using a degenerate multivariate normal model for the distribution of organismic variables, the form of least-squares regression analysis required to estimate a linear functional relationship between variables is derived. It is suggested that the two conventional regression lines may be considered to describe functional, not merely statistical,…

  1. Data Combination and Instrumental Variables in Linear Models

    ERIC Educational Resources Information Center

    Khawand, Christopher

    2012-01-01

    Instrumental variables (IV) methods allow for consistent estimation of causal effects, but suffer from poor finite-sample properties and data availability constraints. IV estimates also tend to have relatively large standard errors, often inhibiting the interpretability of differences between IV and non-IV point estimates. Lastly, instrumental…

  2. Estimating integrated variance in the presence of microstructure noise using linear regression

    NASA Astrophysics Data System (ADS)

    Holý, Vladimír

    2017-07-01

    Financial high-frequency data are useful for estimating the integrated variance of asset prices, but as the number of observations increases, so-called microstructure noise emerges. This noise can significantly bias the realized variance estimator. We propose a method for estimating the integrated variance that is robust to microstructure noise, as well as for testing for the presence of the noise. Our method uses linear regression in which realized variances estimated from different data subsamples act as the dependent variable, while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
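    The regression idea described above can be sketched as follows; the simulation parameters, subsampling grid, and variable names are invented for illustration and this is not the paper's exact estimator:

```python
import random

random.seed(1)

# Simulate an efficient log-price with constant volatility plus i.i.d.
# microstructure noise (sigma2, omega2, and the grid are illustrative).
N = 23400                    # one tick per second over a 6.5 h session
sigma2 = 1e-4                # integrated variance of the efficient price
omega2 = 1e-8                # variance of the microstructure noise

price = [0.0]
for _ in range(N):
    price.append(price[-1] + random.gauss(0.0, (sigma2 / N) ** 0.5))
noisy = [p + random.gauss(0.0, omega2 ** 0.5) for p in price]

def realized_variance(x, step):
    """Sum of squared returns over a grid sampled every `step` ticks."""
    sub = x[::step]
    return sum((sub[i + 1] - sub[i]) ** 2 for i in range(len(sub) - 1))

# Under i.i.d. noise, E[RV] = IV + 2*n*omega2, where n is the number of
# returns in the subsample, so regressing RV on n recovers the integrated
# variance as the intercept and the noise variance from the slope.
ns, rvs = [], []
for step in range(1, 40):
    ns.append(N // step)
    rvs.append(realized_variance(noisy, step))

n_bar, rv_bar = sum(ns) / len(ns), sum(rvs) / len(rvs)
slope = sum((n - n_bar) * (rv - rv_bar) for n, rv in zip(ns, rvs)) / \
        sum((n - n_bar) ** 2 for n in ns)
intercept = rv_bar - slope * n_bar   # estimate of the integrated variance
```

    At the highest sampling frequency the plain realized variance is dominated by the noise term, while the regression intercept stays close to the true integrated variance.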

  3. Nonlinear effects in a plain journal bearing. I - Analytical study. II - Results

    NASA Technical Reports Server (NTRS)

    Choy, F. K.; Braun, M. J.; Hu, Y.

    1991-01-01

    In the first part of this work, a numerical model is presented which couples the variable-property Reynolds equation with a rotor-dynamics model for the calculation of a plain journal bearing's nonlinear characteristics when working with a cryogenic fluid, LOX. The effects of load on the linear/nonlinear plain journal bearing characteristics are analyzed and presented in a parametric form. The second part of this work presents numerical results obtained for specific parametric-study input variables (lubricant inlet temperature, external load, angular rotational speed, and axial misalignment). Attention is given to the interrelations between pressure profiles and bearing linear and nonlinear characteristics.

  4. Detecting multiple outliers in linear functional relationship model for circular variables using clustering technique

    NASA Astrophysics Data System (ADS)

    Mokhtar, Nurkhairany Amyra; Zubairi, Yong Zulina; Hussin, Abdul Ghapor

    2017-05-01

    Outlier detection has been used extensively in data analysis to detect anomalous observations and has important applications in fraud detection and robust analysis. In this paper, we propose a method for detecting multiple outliers for circular variables in the linear functional relationship model. Using the residual values of the Caires and Wyatt model, we apply a hierarchical clustering procedure. Using a tree diagram, we illustrate the graphical approach to outlier detection. A simulation study is carried out to verify the accuracy of the proposed method. An application to a real data set is also given to show its practical applicability.
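    For intuition, the clustering step can be imitated in one dimension; the residual values below are invented, and the Caires and Wyatt residual definition for circular data is not reproduced here:

```python
# Residuals from a fitted model (values invented); the last two are
# planted outliers.
residuals = [0.12, -0.05, 0.08, -0.11, 0.02, 0.07, -0.09, 0.04, 1.35, -1.42]

# For one-dimensional data, single-linkage agglomerative clustering cut at
# the largest merge height reduces to splitting the sorted values at the
# largest gap.  We cluster absolute residuals into two groups and flag the
# smaller, more extreme one as the outlier set.
order = sorted(range(len(residuals)), key=lambda i: abs(residuals[i]))
vals = [abs(residuals[i]) for i in order]
gaps = [(vals[i + 1] - vals[i], i) for i in range(len(vals) - 1)]
_, cut = max(gaps)                    # position of the largest merge height
outliers = sorted(order[cut + 1:])    # indices beyond the cut
```

    Here the largest gap separates the eight small residuals from the two planted ones, so indices 8 and 9 are flagged.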

  5. A generalized linear integrate-and-fire neural model produces diverse spiking behaviors.

    PubMed

    Mihalaş, Stefan; Niebur, Ernst

    2009-03-01

    For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model's rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation.
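    A minimal sketch of this style of model follows. All constants are invented for the example (they are not the published parameter fits), but the structure matches the description: linear subthreshold equations for the potential and a variable threshold, plus an update rule applied at threshold crossing:

```python
# Illustrative constants (volts, with rate units folded in); not the
# published parameter values.
dt, T = 1e-4, 0.5              # time step and duration (s)
g, E_L, I = 50.0, -0.070, 1.2  # leak rate, resting potential, input drive
theta_inf, b = -0.050, 10.0    # threshold baseline and threshold decay rate
reset_V, add_theta = -0.065, 0.004

V, theta = E_L, theta_inf
spikes = []
for step in range(int(T / dt)):
    # linear subthreshold dynamics for both state variables
    V += dt * (-g * (V - E_L) + I)
    theta += dt * (-b * (theta - theta_inf))
    if V >= theta:                 # update rule: reset V, raise threshold
        spikes.append(step * dt)
        V = reset_V
        theta += add_theta

isis = [t2 - t1 for t1, t2 in zip(spikes, spikes[1:])]
```

    The raised threshold decays between spikes more slowly than the potential recovers, so the inter-spike intervals lengthen: an adapting response produced not by the linear differential equations but by the update rules, as the abstract emphasizes.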

  6. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    PubMed

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using Kalman filter and particle filter, respectively, which improves the computational efficiency more so than if only the particle filter was used. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows achieving the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.
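    The Rao-Blackwellisation idea (particles for the non-linear substate, a per-particle Kalman filter for the linear substate) can be sketched on a toy model. This is not the paper's clock-offset/skew model or its DPM delay model; the dynamics, noise levels, and particle count are invented:

```python
import math
import random

random.seed(3)

# Toy model: the non-linear state n_t = sin(n_{t-1}) + v_t is tracked with
# particles, while conditional on it the linear state l_t = l_{t-1} + w_t
# with observation y_t = n_t + l_t + e_t is marginalized by a per-particle
# Kalman filter.
Q_n, Q_l, R = 0.05, 0.01, 0.1
T, P = 60, 200

n_true, l_true, ys = [0.5], [1.0], []
for _ in range(T):
    n_true.append(math.sin(n_true[-1]) + random.gauss(0, Q_n ** 0.5))
    l_true.append(l_true[-1] + random.gauss(0, Q_l ** 0.5))
    ys.append(n_true[-1] + l_true[-1] + random.gauss(0, R ** 0.5))

parts = [(0.5, 1.0, 1.0)] * P          # (n, mean of l, variance of l)
for y in ys:
    moved, weights = [], []
    for n, lm, lv in parts:
        n2 = math.sin(n) + random.gauss(0, Q_n ** 0.5)   # propagate particle
        lv2 = lv + Q_l                                   # Kalman predict
        s = lv2 + R                                      # innovation variance
        resid = y - n2 - lm
        # weight by the marginal likelihood of y given the particle's path
        weights.append(math.exp(-0.5 * resid * resid / s) / math.sqrt(s))
        k = lv2 / s                                      # Kalman update
        moved.append((n2, lm + k * resid, lv2 * (1 - k)))
    total = sum(weights)
    idx = random.choices(range(P), weights=[w / total for w in weights], k=P)
    parts = [moved[i] for i in idx]     # multinomial resampling

est_l = sum(lm for _, lm, _ in parts) / P   # posterior mean of linear state
```

    Because the linear substate is handled analytically, the particles only have to cover the non-linear dimension, which is the computational saving the abstract refers to.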

  7. Incorporating nonlinearity into mediation analyses.

    PubMed

    Knafl, George J; Knafl, Kathleen A; Grey, Margaret; Dixon, Jane; Deatrick, Janet A; Gallo, Agatha M

    2017-03-21

    Mediation is an important issue considered in the behavioral, medical, and social sciences. It addresses situations where the effect of a predictor variable X on an outcome variable Y is explained to some extent by an intervening, mediator variable M. Methods for addressing mediation have been available for some time. While these methods continue to undergo refinement, the relationships underlying mediation are commonly treated as linear in the outcome Y, the predictor X, and the mediator M. These relationships, however, can be nonlinear. Methods are needed for assessing when mediation relationships can be treated as linear and for estimating them when they are nonlinear. Existing adaptive regression methods based on fractional polynomials are extended here to address nonlinearity in mediation relationships, but assuming those relationships are monotonic as would be consistent with theories about directionality of such relationships. Example monotonic mediation analyses are provided assessing linear and monotonic mediation of the effect of family functioning (X) on a child's adaptation (Y) to a chronic condition by the difficulty (M) for the family in managing the child's condition. Example moderated monotonic mediation and simulation analyses are also presented. Adaptive methods provide an effective way to incorporate possibly nonlinear monotonicity into mediation relationships.
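    The linear baseline that the adaptive fractional-polynomial methods generalize is the standard product-of-coefficients estimate, sketched below on invented data (the coefficients 0.8 and 0.5 and noise levels are illustrative):

```python
import random

random.seed(0)

# Toy linear mediation X -> M -> Y: M = 0.8*X + e, Y = 0.5*M + 0.1*X + e,
# so the indirect effect of X on Y through M is 0.8 * 0.5 = 0.4.
n = 2000
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.8 * x + random.gauss(0, 0.3) for x in X]
Y = [0.5 * m + 0.1 * x + random.gauss(0, 0.3) for x, m in zip(X, M)]

def slope(x, y):
    """Least-squares slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (c - my) for a, c in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

a = slope(X, M)                        # path X -> M
# Path M -> Y controlling for X, via Frisch-Waugh residualization: the
# slope of the Y-residuals on the M-residuals (both residualized on X)
# equals the multiple-regression coefficient of M.
resM = [m - a * x for x, m in zip(X, M)]
yx = slope(X, Y)
resY = [y - yx * x for x, y in zip(X, Y)]
b = slope(resM, resY)
indirect = a * b                       # product-of-coefficients estimate
```

    When the X-M or M-Y relationship is non-linear but monotonic, as in the article, the straight-line fits above are replaced by adaptive monotonic regression functions.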

  8. Multiple regression technique for Pth degree polynomials with and without linear cross products

    NASA Technical Reports Server (NTRS)

    Davis, J. W.

    1973-01-01

    A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated so that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products; these evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; they show the output formats and typical plots comparing computer results to each set of input data.
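    The two cases can be sketched for a second-degree surface in two variables. The data are generated from a known surface so recovery can be checked; everything here is illustrative and does not reproduce the original programs:

```python
# Fit y = c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 (+ c5*x1*x2 when cross
# products are included) by least squares via the normal equations.
def design_row(x1, x2, cross):
    row = [1.0, x1, x2, x1 * x1, x2 * x2]
    if cross:
        row.append(x1 * x2)
    return row

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

pts = [(i * 0.1, j * 0.1) for i in range(10) for j in range(10)]
y = [2.0 + 1.0 * a - 0.5 * b + 0.25 * a * b for a, b in pts]  # true surface

results = {}
for cross in (False, True):
    X = [design_row(a, b, cross) for a, b in pts]
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yy for r, yy in zip(X, y)) for i in range(k)]
    results[cross] = solve(XtX, Xty)
```

    With the cross-product column included, the fit recovers the generating coefficients exactly; without it, the x1*x2 interaction is smeared across the remaining terms.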

  9. A Generalized Linear Integrate-and-Fire Neural Model Produces Diverse Spiking Behaviors

    PubMed Central

    Mihalaş, Ştefan; Niebur, Ernst

    2010-01-01

    For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model’s rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation. PMID:18928368

  10. Morphometric variability of Arctodiaptomus salinus (Copepoda) in the Mediterranean-Black Sea region.

    PubMed

    Anufriieva, Elena V; Shadrin, Nickolai V

    2015-11-18

    Inter-species variability in morphological traits creates a need to know the range of variability of characteristics in the species for taxonomic and ecological tasks. Copepoda Arctodiaptomus salinus, which inhabits water bodies across Eurasia and North Africa, plays a dominant role in plankton of different water bodies-from fresh to hypersaline. This work assesses the intra- and inter-population morphometric variability of A. salinus in the Mediterranean-Black Sea region and discusses some observed regularities. The variability of linear body parameters and proportions was studied. The impacts of salinity, temperature, and population density on morphological characteristics and their variability can manifest themselves in different ways at the intra- and inter-population levels. A significant effect of salinity, pH and temperature on the body proportions was not found. Their intra-population variability is dependent on temperature and salinity. Sexual dimorphism of A. salinus manifests in different linear parameters, proportions, and their variability. There were no effects of temperature, pH and salinity on the female/male parameter ratio. There were significant differences in the body proportions of males and females in different populations. The influence of temperature, salinity, and population density can be attributed to 80%-90% of intra-population variability of A. salinus. However, these factors can explain less than 40% of inter-population differences. Significant differences in the body proportions of males and females from different populations may suggest that some local populations of A. salinus in the Mediterranean-Black Sea region are in the initial stages of differentiation.

  11. A non-parametric postprocessor for bias-correcting multi-model ensemble forecasts of hydrometeorological and hydrologic variables

    NASA Astrophysics Data System (ADS)

    Brown, James; Seo, Dong-Jun

    2010-05-01

    Operational forecasts of hydrometeorological and hydrologic variables often contain large uncertainties, for which ensemble techniques are increasingly used. However, the utility of ensemble forecasts depends on the unbiasedness of the forecast probabilities. We describe a technique for quantifying and removing biases from ensemble forecasts of hydrometeorological and hydrologic variables, intended for use in operational forecasting. The technique makes no a priori assumptions about the distributional form of the variables, which is often unknown or difficult to model parametrically. The aim is to estimate the conditional cumulative distribution function (ccdf) of the observed variable given a (possibly biased) real-time ensemble forecast from one or several forecasting systems (multi-model ensembles). The technique is based on Bayesian optimal linear estimation of indicator variables, and is analogous to indicator cokriging (ICK) in geostatistics. By developing linear estimators for the conditional expectation of the observed variable at many thresholds, ICK provides a discrete approximation of the full ccdf. Since ICK minimizes the conditional error variance of the indicator expectation at each threshold, it effectively minimizes the Continuous Ranked Probability Score (CRPS) when infinitely many thresholds are employed. However, the ensemble members used as predictors in ICK, and other bias-correction techniques, are often highly cross-correlated, both within and between models. Thus, we propose an orthogonal transform of the predictors used in ICK, which is analogous to using their principal components in the linear system of equations. This leads to a well-posed problem in which a minimum number of predictors are used to provide maximum information content in terms of the total variance explained. 
The technique is used to bias-correct precipitation ensemble forecasts from the NCEP Global Ensemble Forecast System (GEFS), for which independent validation results are presented. Extension to multimodel ensembles from the NCEP GFS and Short Range Ensemble Forecast (SREF) systems is also proposed.
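    The indicator step can be sketched with a one-predictor stand-in for the full indicator cokriging system; the data, bias, and thresholds below are invented, and a single ensemble mean replaces the many cross-correlated predictors discussed in the abstract:

```python
import random

random.seed(2)

# Toy data: observations and a biased single-model ensemble mean.
n = 500
obs = [random.gauss(10, 3) for _ in range(n)]
ens_mean = [o + 2.0 + random.gauss(0, 1) for o in obs]   # biased forecast

def lin_fit(x, y):
    """Least-squares intercept and slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    return my - b * mx, b

# One linear estimator of the threshold indicator per threshold.
thresholds = [6, 8, 10, 12, 14]
models = []
for t in thresholds:
    indicator = [1.0 if o <= t else 0.0 for o in obs]    # indicator variable
    models.append(lin_fit(ens_mean, indicator))

# Fitted indicator expectations at a new forecast give a discrete
# approximation of the conditional cdf of the observation.
new_forecast = 12.0
ccdf = [min(1.0, max(0.0, a + b * new_forecast)) for a, b in models]
```

    Stacking the fitted indicator expectations over many thresholds yields the discrete ccdf approximation described above; the real technique solves a cokriging system over several (orthogonalized) ensemble predictors per threshold rather than a single regression.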

  12. Warping of a computerized 3-D atlas to match brain image volumes for quantitative neuroanatomical and functional analysis

    NASA Astrophysics Data System (ADS)

    Evans, Alan C.; Dai, Weiqian; Collins, D. Louis; Neelin, Peter; Marrett, Sean

    1991-06-01

    We describe the implementation, experience and preliminary results obtained with a 3-D computerized brain atlas for topographical and functional analysis of brain sub-regions. A volume-of-interest (VOI) atlas was produced by manual contouring on 64 adjacent 2 mm-thick MRI slices to yield 60 brain structures in each hemisphere which could be adjusted, originally by global affine transformation or local interactive adjustments, to match individual MRI datasets. We have now added a non-linear deformation (warp) capability (Bookstein, 1989) into the procedure for fitting the atlas to the brain data. Specific target points are identified in both atlas and MRI spaces which define a continuous 3-D warp transformation that maps the atlas on to the individual brain image. The procedure was used to fit MRI brain image volumes from 16 young normal volunteers. Regional volume and positional variability were determined, the latter in such a way as to assess the extent to which previous linear models of brain anatomical variability fail to account for the true variation among normal individuals. Using a linear model for atlas deformation yielded 3-D fits of the MRI data which, when pooled across subjects and brain regions, left a residual mis-match of 6 - 7 mm as compared to the non-linear model. The results indicate a substantial component of morphometric variability is not accounted for by linear scaling. This has profound implications for applications which employ stereotactic coordinate systems which map individual brains into a common reference frame: quantitative neuroradiology, stereotactic neurosurgery and cognitive mapping of normal brain function with PET. In the latter case, the combination of a non-linear deformation algorithm would allow for accurate measurement of individual anatomic variations and the inclusion of such variations in inter-subject averaging methodologies used for cognitive mapping with PET.

  13. Engineering Overview of a Multidisciplinary HSCT Design Framework Using Medium-Fidelity Analysis Codes

    NASA Technical Reports Server (NTRS)

    Weston, R. P.; Green, L. L.; Salas, A. O.; Samareh, J. A.; Townsend, J. C.; Walsh, J. L.

    1999-01-01

    An objective of the HPCC Program at NASA Langley has been to promote the use of advanced computing techniques to more rapidly solve the problem of multidisciplinary optimization of a supersonic transport configuration. As a result, a software system has been designed and is being implemented to integrate a set of existing discipline analysis codes, some of them CPU-intensive, into a distributed computational framework for the design of a High Speed Civil Transport (HSCT) configuration. The proposed paper will describe the engineering aspects of integrating these analysis codes and additional interface codes into an automated design system. The objective of the design problem is to optimize the aircraft weight for given mission conditions, range, and payload requirements, subject to aerodynamic, structural, and performance constraints. The design variables include both thicknesses of structural elements and geometric parameters that define the external aircraft shape. An optimization model has been adopted that uses the multidisciplinary analysis results and the derivatives of the solution with respect to the design variables to formulate a linearized model that provides input to the CONMIN optimization code, which outputs new values for the design variables. The analysis process begins by deriving the updated geometries and grids from the baseline geometries and grids using the new values for the design variables. This free-form deformation approach provides internal FEM (finite element method) grids that are consistent with aerodynamic surface grids. The next step involves using the derived FEM and section properties in a weights process to calculate detailed weights and the center of gravity location for specified flight conditions. The weights process computes the as-built weight, weight distribution, and weight sensitivities for given aircraft configurations at various mass cases. 
Currently, two mass cases are considered: cruise and gross take-off weight (GTOW). Weights information is obtained from correlations of data from three sources: 1) as-built initial structural and non-structural weights from an existing database, 2) theoretical FEM structural weights and sensitivities from Genesis, and 3) empirical as-built weight increments, non-structural weights, and weight sensitivities from FLOPS. For the aeroelastic analysis, a variable-fidelity aerodynamic analysis has been adopted. This approach uses infrequent CPU-intensive non-linear CFD to calculate a non-linear correction relative to a linear aero calculation for the same aerodynamic surface at an angle of attack that results in the same configuration lift. For efficiency, this nonlinear correction is applied after each subsequent linear aero solution during the iterations between the aerodynamic and structural analyses. Convergence is achieved when the vehicle shape being used for the aerodynamic calculations is consistent with the structural deformations caused by the aerodynamic loads. To make the structural analyses more efficient, a linearized structural deformation model has been adopted, in which a single stiffness matrix can be used to solve for the deformations under all the load conditions. Using the converged aerodynamic loads, a final set of structural analyses are performed to determine the stress distributions and the buckling conditions for constraint calculation. Performance constraints are obtained by running FLOPS using drag polars that are computed using results from non-linear corrections to the linear aero code plus several codes to provide drag increments due to skin friction, wave drag, and other miscellaneous drag contributions. The status of the integration effort will be presented in the proposed paper, and results will be provided that illustrate the degree of accuracy in the linearizations that have been employed.

  14. Sensitivity to Mental Effort and Test-Retest Reliability of Heart Rate Variability Measures in Healthy Seniors

    PubMed Central

    Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P.; Oken, Barry S.

    2011-01-01

    Objectives To determine 1) whether heart rate variability (HRV) is a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and 2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches, are useful for studying such effects. Methods Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort, while ECG was recorded. They underwent the same tasks and recordings two weeks later. Traditional indices and 13 non-linear indices of HRV, including Poincaré, entropy, and detrended fluctuation analysis (DFA), were determined. Results Time domain (especially mean R-R interval/RRI), frequency domain and, among non-linear parameters, Poincaré and DFA were the most reliable indices. Mean RRI, time domain and Poincaré were also the most sensitive to different mental effort task loads and had the largest effect size. Conclusions Overall, linear measures were the most sensitive and reliable indices of mental effort. Among non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. Significance A large number of HRV parameters were both reliable and sensitive indices of mental effort, although the simple linear methods were the most sensitive. PMID:21459665
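    The time-domain and Poincaré indices named above are simple to compute from an RR-interval series. The intervals below are invented, and the SD1/SD2 formulas used are the common variance-based definitions, which may differ in detail from the study's implementation:

```python
import math

# Toy RR-interval series in ms (illustrative values, not patient data).
rr = [812, 798, 830, 805, 821, 795, 840, 810, 802, 825, 815, 800]

mean_rr = sum(rr) / len(rr)                                   # mean RRI
sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr) / (len(rr) - 1))
diffs = [b - a for a, b in zip(rr, rr[1:])]
rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))     # RMSSD

# Poincaré descriptors: SD1 is the spread perpendicular to the identity
# line of the (RR_n, RR_n+1) plot (short-term variability), SD2 the spread
# along it (long-term variability).
sd1 = math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))
var_pop = sum((x - mean_rr) ** 2 for x in rr) / len(rr)       # population var
sd2 = math.sqrt(max(2 * var_pop - sd1 ** 2, 0.0))
```

    Note the identity SD1 = RMSSD / sqrt(2) under these definitions, which is why SD1 and RMSSD tend to behave as near-duplicate markers of short-term variability.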

  15. [Prediction model of health workforce and beds in county hospitals of Hunan by multiple linear regression].

    PubMed

    Ling, Ru; Liu, Jiawang

    2011-12-01

    To construct a prediction model for health workforce and hospital beds in county hospitals of Hunan by multiple linear regression. We surveyed 16 counties in Hunan with stratified random sampling according to uniform questionnaires, and performed multiple linear regression analysis with 20 indicators selected by literature review. Independent variables in the multiple linear regression model on medical personnel in county hospitals included the counties' urban residents' income, crude death rate, medical beds, business occupancy, professional equipment value, the number of devices valued above 10 000 yuan, fixed assets, long-term debt, medical income, medical expenses, outpatient and emergency visits, hospital visits, actual available bed days, and utilization rate of hospital beds. Independent variables in the multiple linear regression model on county hospital beds included the population aged 65 and above in the counties, disposable income of urban residents, medical personnel of medical institutions in the county area, business occupancy, the total value of professional equipment, fixed assets, long-term debt, medical income, medical expenses, outpatient and emergency visits, hospital visits, actual available bed days, utilization rate of hospital beds, and length of hospitalization. The prediction models show good explanatory power and fit, and may be used for short- and mid-term forecasting.

  16. A program for identification of linear systems

    NASA Technical Reports Server (NTRS)

    Buell, J.; Kalaba, R.; Ruspini, E.; Yakush, A.

    1971-01-01

    A program has been written for the identification of parameters in certain linear systems. These systems appear in biomedical problems, particularly in compartmental models of pharmacokinetics. The method presented here assumes that some of the state variables are regularly modified by jump conditions. This simulates administration of drugs following some prescribed drug regimen. Parameters are identified by a least-squares fit of the linear differential system to a set of experimental observations. The method is especially suited to cases where the interval of observation of the system is very long.

  17. Reduced-Size Integer Linear Programming Models for String Selection Problems: Application to the Farthest String Problem.

    PubMed

    Zörnig, Peter

    2015-08-01

    We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially smaller than that of the integer linear programming models known in the literature. Moreover, the solution of the linear-programming relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
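    For readers unfamiliar with the problem itself: the farthest string problem asks for a string maximizing the minimum Hamming distance to a given set of strings. The ILP models in the paper scale far beyond this, but on a tiny invented instance the objective can be checked by exhaustive search:

```python
from itertools import product

# Tiny invented instance: find a length-4 string over {A,C,G,T} that is as
# far as possible (in minimum Hamming distance) from all three inputs.
strings = ["ACGT", "AGGT", "ACGA"]
alphabet = "ACGT"
L = len(strings[0])

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

best, best_dist = None, -1
for cand in product(alphabet, repeat=L):
    t = "".join(cand)
    d = min(hamming(t, s) for s in strings)   # distance to closest input
    if d > best_dist:
        best, best_dist = t, d
```

    An ILP formulation replaces this enumeration with one 0-1 variable per (position, character) pair plus distance constraints, which is where reducing the variable and constraint counts pays off.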

  18. An Exploratory Study of Resting Cardiac Rate and Variability from the Last Trimester of Prenatal Life Through the First Year of Postnatal Life

    ERIC Educational Resources Information Center

    Lewis, Michael; And Others

    1970-01-01

    The data indicate no relationship between maternal and fetal data. Moreover, there are clear developmental patterns of resting cardiac response over the first year of life, with rate and variability showing linear decreases. (Author/WY)

  19. Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.

    2003-01-01

    The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…

  20. RELATION OF ENVIRONMENTAL CHARACTERISTICS TO FISH ASSEMBLAGES IN THE UPPER FRENCH BROAD RIVER BASIN, NORTH CAROLINA

    EPA Science Inventory

    Fish assemblages at 16 sites in the upper French Broad River basin, North Carolina were related to environmental variables using detrended correspondence analysis (DCA) and linear regression. This study was conducted at the landscape scale because regional variables are controlle...

  1. Centering Effects in HLM Level-1 Predictor Variables.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.; Bembry, Karen

    Research has suggested that important research questions can be addressed with meaningful interpretations using hierarchical linear modeling (HLM). The proper interpretation of results, however, is invariably linked to the choice of centering for the Level-1 predictor variables that produce the outcome measure for the Level-2 regression analysis.…

  2. Segmented Polynomial Models in Quasi-Experimental Research.

    ERIC Educational Resources Information Center

    Wasik, John L.

    1981-01-01

    The use of segmented polynomial models is explained. Examples of design matrices of dummy variables are given for the least squares analyses of time series and discontinuity quasi-experimental research designs. Linear combinations of dummy variable vectors appear to provide tests of effects in the two quasi-experimental designs. (Author/BW)
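    The dummy-variable design matrix for an interrupted (discontinuity) design can be sketched as follows; the interruption point and coefficients are invented for the example:

```python
# Interrupted time series over t = 0..9 with the interruption at t = 5:
# columns are [intercept, t, level-change dummy D, slope-change D*(t-5)].
t0 = 5
rows = [[1, t, int(t >= t0), (t - t0) * int(t >= t0)] for t in range(10)]

# A known segmented model: baseline level 2, slope 0.5, then a jump of 3
# and an additional slope of 1 after the interruption.
beta = [2.0, 0.5, 3.0, 1.0]
y = [sum(b * x for b, x in zip(beta, row)) for row in rows]
```

    Least-squares coefficients on the third and fourth columns estimate the level and slope changes directly, and linear combinations of these dummy vectors give the effect tests mentioned in the abstract.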

  3. Measuring the Readability of Elementary Algebra Using the Cloze Technique.

    ERIC Educational Resources Information Center

    Kulm, Gerald

    The relationship to readability of ten variables characterizing structural properties of mathematical prose was investigated in elementary algebra textbooks. Readability was measured by algebra students' responses to two forms of cloze tests. Linear and curvilinear correlations were calculated between each structural variable and the cloze test.…

  4. Obstetric and Parental Psychiatric Variables as Potential Predictors of Autism Severity

    ERIC Educational Resources Information Center

    Wallace, Anna E.; Anderson, George M.; Dubrow, Robert

    2008-01-01

    Associations between obstetric and parental psychiatric variables and subjects' Autism Diagnostic Interview-Revised (ADI-R) and Autism Diagnostic Observation Schedule (ADOS) domain scores were examined using linear mixed effects models. Data for the 228 families studied were provided by the Autism Genetic Resource Exchange. Hypertension (P =…

  5. Thinking Visually about Algebra

    ERIC Educational Resources Information Center

    Baroudi, Ziad

    2015-01-01

    Many introductions to algebra in high school begin with teaching students to generalise linear numerical patterns. This article argues that this approach needs to be changed so that students encounter variables in the context of modelling visual patterns so that the variables have a meaning. The article presents sample classroom activities,…

  6. Diagnostic Procedures for Detecting Nonlinear Relationships between Latent Variables

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Baldasaro, Ruth E.; Gottfredson, Nisha C.

    2012-01-01

    Structural equation models are commonly used to estimate relationships between latent variables. Almost universally, the fitted models specify that these relationships are linear in form. This assumption is rarely checked empirically, largely for lack of appropriate diagnostic techniques. This article presents and evaluates two procedures that can…

  7. Reliability of the Load-Velocity Relationship Obtained Through Linear and Polynomial Regression Models to Predict the One-Repetition Maximum Load.

    PubMed

    Pestaña-Melero, Francisco Luis; Haff, G Gregory; Rojas, Francisco Javier; Pérez-Castilla, Alejandro; García-Ramos, Amador

    2017-12-18

    This study aimed to compare the between-session reliability of the load-velocity relationship between (1) linear vs. polynomial regression models, (2) concentric-only vs. eccentric-concentric bench press variants, and (3) the within-participants vs. the between-participants variability of the velocity attained at each percentage of the one-repetition maximum (%1RM). The load-velocity relationship of 30 men (age: 21.2±3.8 y; height: 1.78±0.07 m; body mass: 72.3±7.3 kg; bench press 1RM: 78.8±13.2 kg) was evaluated by means of linear and polynomial regression models in the concentric-only and eccentric-concentric bench press variants in a Smith machine. Two sessions were performed with each bench press variant. The main findings were: (1) first-order polynomials (CV: 4.39%-4.70%) provided the load-velocity relationship with higher reliability than second-order polynomials (CV: 4.68%-5.04%); (2) the reliability of the load-velocity relationship did not differ between the concentric-only and eccentric-concentric bench press variants; (3) the within-participants variability of the velocity attained at each %1RM was markedly lower than the between-participants variability. Taken together, these results highlight that, regardless of the bench press variant considered, individual determination of the load-velocity relationship by a linear regression model can be recommended for monitoring and prescribing relative load in the Smith machine bench press exercise.
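    As a rough illustration of the first-order (linear) load-velocity model favored above, the sketch below fits v = a·load + b and extrapolates to an assumed minimal velocity threshold (MVT) to predict the 1RM; all loads, velocities, and the threshold value are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical submaximal trials: load (kg) and mean concentric velocity (m/s).
loads = np.array([20.0, 40.0, 60.0, 80.0])
velocities = np.array([1.3, 1.0, 0.7, 0.4])

# First-order (linear) load-velocity model: v = a*load + b.
a, b = np.polyfit(loads, velocities, 1)

# Predict 1RM as the load at an assumed minimal velocity threshold.
mvt = 0.17  # m/s; an assumed value, the study's actual threshold may differ
predicted_1rm = (mvt - b) / a
```

The same `np.polyfit` call with degree 2 would give the second-order polynomial variant the study compares against.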

  8. Study of Heart Rate Variability in Bipolar Disorder: Linear and Non-Linear Parameters during Sleep

    PubMed Central

    Migliorini, Matteo; Mendez, Martin O.; Bianchi, Anna M.

    2012-01-01

    The aim of the study is to define physiological parameters and vital signs that may be related to the mood and mental status in patients affected by bipolar disorder. In particular we explored the autonomic nervous system through the analysis of the heart rate variability. Many different parameters, in the time and in the frequency domain, linear and non-linear were evaluated during the sleep in a group of normal subject and in one patient in four different conditions. The recording of the signals was performed through a wearable sensorized T-shirt. Heart rate variability (HRV) signal and movement analysis allowed also obtaining sleep staging and the estimation of REM sleep percentage over the total sleep time. A group of eight normal females constituted the control group, on which normality ranges were estimated. The pathologic subject was recorded during four different nights, at time intervals of at least 1 week, and during different phases of the disturbance. Some of the examined parameters (MEANNN, SDNN, RMSSD) confirmed reduced HRV in depression and bipolar disorder. REM sleep percentage was found to be increased. Lempel–Ziv complexity and sample entropy, on the other hand, seem to correlate with the depression level. Even if the number of examined subjects is still small, and the results need further validation, the proposed methodology and the calculated parameters seem promising tools for the monitoring of mood changes in psychiatric disorders. PMID:22291638
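    The time-domain HRV indices named above (MEANNN, SDNN, RMSSD) follow directly from an R-R interval series. A minimal sketch with hypothetical intervals:

```python
import numpy as np

# Hypothetical R-R (N-N) intervals in milliseconds.
rr = np.array([810.0, 790.0, 820.0, 800.0, 780.0, 805.0])

mean_nn = rr.mean()                        # MEANNN: mean N-N interval
sdnn = rr.std(ddof=1)                      # SDNN: SD of all N-N intervals
rmssd = np.sqrt(np.mean(np.diff(rr)**2))   # RMSSD: RMS of successive differences
```

Reduced SDNN and RMSSD relative to a control range are the kind of "reduced HRV" finding the abstract reports.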

  9. Design and fabrication of a full-scale actively controlled satellite appendage simulator unit

    NASA Astrophysics Data System (ADS)

    Jacobs, Jack H.; Quenon, Dan; Hadden, Steve; Self, Rick

    1999-07-01

    Modern satellites require the ability to slew and settle quickly in order to acquire or transmit data efficiently. Solar arrays and communication antennas cause low frequency disturbances to the satellite bus during these maneuvers, causing undesirable induced vibration of the payload. The ability to develop and experimentally demonstrate attitude control laws which compensate for these flexible body disturbances is of prime importance to modern day satellite manufacturers. Honeywell has designed and fabricated an actively controlled Appendage Simulator Unit (ASU) which can physically induce the modal characteristics of satellite appendages onto a ground based satellite test bed installed on an air bearing. The ASU consists of two orthogonal fulcrum beams weighing over 800 pounds each, utilizing two electrodynamic shakers to induce active torques onto the bus. The ASU is programmed with the state space characteristics of the desired appendage and responds in real time to the bus motion to generate realistic disturbances back onto the satellite. Two LVDTs are used on each fulcrum beam to close the loop and ensure the system responds in real time the same way a real solar array would on-orbit. Each axis is independently programmable in order to simulate various orientations or modal contributions from an appendage. The design process for the ASU involved the optimization of sensors, actuators, control authority, weight, power and functionality. The smart structure system design process and experimental results are described in detail.

  10. Influence of hot asphalt mixture using asbuton on road composite pavement

    NASA Astrophysics Data System (ADS)

    Gaus, Abdul; Darwis, Muhammad; Imran

    2017-11-01

    Construction and rehabilitation of road infrastructure in Indonesia require about 1.2 million tons of asphalt per year, nearly all of it petroleum asphalt. Only half of this demand can be supplied domestically, while about 600 thousand tons have to be imported. Indonesia has natural asphalt (asbuton) with a quite large deposit, but it has not been fully utilized. Limited asphalt availability and growing domestic demand will drive up bitumen costs in the domestic market, adding to the rising cost of road infrastructure. This study aims to determine the effect of adding a layer of asbuton asphalt concrete pavement to rigid pavement (PR-modification). Stresses occurring in the rigid pavement, asphalt concrete layer and base course were measured using LVDTs, and those in the subgrade using a soil pressure transducer. Using asbuton in asphalt concrete improves Marshall stability. The maximum deflection occurring in the PR-modification was 5.19 mm at a maximum load of 175.10 kN. The vertical and horizontal stresses occurring at the base course at the -20 cm level were 0.855 MPa and 0.00282 MPa, respectively. Adding a layer of asbuton asphalt concrete to rigid pavement increased strength by 9.5%.

  11. Analysis of Student and School Level Variables Related to Mathematics Self-Efficacy Level Based on PISA 2012 Results for China-Shanghai, Turkey, and Greece

    ERIC Educational Resources Information Center

    Usta, H. Gonca

    2016-01-01

    This study aims to analyze the student and school level variables that affect students' self-efficacy levels in mathematics in China-Shanghai, Turkey, and Greece based on PISA 2012 results. In line with this purpose, the hierarchical linear regression model (HLM) was employed. The interschool variability is estimated at approximately 17% in…
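    The quoted "interschool variability ... approximately 17%" corresponds to an intraclass correlation computed from the HLM's variance components. A minimal sketch with hypothetical variance values chosen to reproduce that figure:

```python
# Intraclass correlation (ICC) from a two-level HLM's variance components.
# The variance values below are hypothetical, chosen only for illustration.
tau00 = 0.17   # between-school variance of the random intercept
sigma2 = 0.83  # within-school (residual) variance

icc = tau00 / (tau00 + sigma2)  # share of variance lying between schools
```

An ICC of 0.17 means about 17% of the variation in self-efficacy lies between schools, which is what motivates a multilevel rather than single-level regression.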

  12. Experimental design for evaluating WWTP data by linear mass balances.

    PubMed

    Le, Quan H; Verheijen, Peter J T; van Loosdrecht, Mark C M; Volcke, Eveline I P

    2018-05-15

    A stepwise experimental design procedure to obtain reliable data from wastewater treatment plants (WWTPs) was developed. The proposed procedure aims at determining sets of additional measurements (besides available ones) that guarantee the identifiability of key process variables, which means that their value can be calculated from other, measured variables, based on available constraints in the form of linear mass balances. Among all solutions, i.e. all possible sets of additional measurements allowing the identifiability of all key process variables, the optimal solutions were found taking into account two objectives, namely the accuracy of the identified key variables and the cost of additional measurements. The results of this multi-objective optimization problem were represented in a Pareto-optimal front. The presented procedure was applied to a full-scale WWTP. Detailed analysis of the relation between measurements allowed the determination of groups of overlapping mass balances. Adding measured variables could only serve in identifying key variables that appear in the same group of mass balances. Besides, the application of the experimental design procedure to these individual groups significantly reduced the computational effort in evaluating available measurements and planning additional monitoring campaigns. The proposed procedure is straightforward and can be applied to other WWTPs with or without prior data collection. Copyright © 2018 Elsevier Ltd. All rights reserved.
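    The core identifiability test, that an unmeasured key variable's value is pinned down by the linear mass balances plus the measured variables, can be sketched as a rank condition on a toy balance matrix; the plant layout and flow values below are hypothetical:

```python
import numpy as np

# Toy plant with balances A x = 0: x1 = x2 + x3 (splitter), x3 = x4.
A = np.array([[1.0, -1.0, -1.0, 0.0],
              [0.0, 0.0, 1.0, -1.0]])

measured, unmeasured = [0, 1], [2, 3]
x_meas = np.array([10.0, 4.0])  # hypothetical measured flows x1, x2

Au, Am = A[:, unmeasured], A[:, measured]

# All unmeasured (key) variables are identifiable iff Au has full column rank.
identifiable = np.linalg.matrix_rank(Au) == len(unmeasured)

# If identifiable, solve Au @ x_unmeas = -Am @ x_meas for the key variables.
x_unmeas, *_ = np.linalg.lstsq(Au, -Am @ x_meas, rcond=None)
```

Adding a measurement corresponds to moving a column from `Au` to `Am`, which is how candidate measurement sets can be screened in the experimental design step.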

  13. Statistical structure of intrinsic climate variability under global warming

    NASA Astrophysics Data System (ADS)

    Zhu, Xiuhua; Bye, John; Fraedrich, Klaus

    2017-04-01

    Climate variability is often studied in terms of fluctuations with respect to the mean state, whereas the dependence between the mean and variability is rarely discussed. We propose a new climate metric to measure the relationship between means and standard deviations of annual surface temperature computed over non-overlapping 100-year segments. This metric is analyzed based on equilibrium simulations of the Max Planck Institute-Earth System Model (MPI-ESM): the last millennium climate (800-1799), the future climate projection following the A1B scenario (2100-2199), and the 3100-year unforced control simulation. A linear relationship is globally observed in the control simulation and thus termed intrinsic climate variability, which is most pronounced in the tropical region with negative regression slopes over the Pacific warm pool and positive slopes in the eastern tropical Pacific. It relates to asymmetric changes in temperature extremes and associates fluctuating climate means with increase or decrease in intensity and occurrence of both El Niño and La Niña events. In the future scenario period, the linear regression slopes largely retain their spatial structure with appreciable changes in intensity and geographical locations. Since intrinsic climate variability describes the internal rhythm of the climate system, it may serve as guidance for interpreting climate variability and climate change signals in the past and the future.
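    The proposed metric, a regression of segment standard deviations on segment means over non-overlapping 100-year windows, can be sketched as follows on a hypothetical stationary series (for which the slope should be near zero):

```python
import numpy as np

rng = np.random.default_rng(1)
temp = rng.normal(14.0, 0.5, 3100)  # hypothetical 3100-year annual temperature series

seg = 100  # non-overlapping 100-year segments
segments = temp.reshape(-1, seg)
means = segments.mean(axis=1)
sds = segments.std(axis=1, ddof=1)

# Slope of SD on mean: the paper's mean-variability relationship at one grid point.
slope = np.polyfit(means, sds, 1)[0]
```

In the MPI-ESM control run this slope is mapped point by point over the globe; a stationary i.i.d. series like the placeholder above has no systematic mean-variability link.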

  14. Decoupling in linear time-varying multivariable systems

    NASA Technical Reports Server (NTRS)

    Sankaran, V.

    1973-01-01

    The necessary and sufficient conditions for the decoupling of an m-input, m-output, linear time varying dynamical system by state variable feedback is described. The class of feedback matrices which decouple the system are illustrated. Systems which do not satisfy these results are described and systems with disturbances are considered. Some examples are illustrated to clarify the results.

  15. A Vernacular for Linear Latent Growth Models

    ERIC Educational Resources Information Center

    Hancock, Gregory R.; Choi, Jaehwa

    2006-01-01

    In its most basic form, latent growth modeling (latent curve analysis) allows an assessment of individuals' change in a measured variable X over time. For simple linear models, as with other growth models, parameter estimates associated with the a construct (amount of X at a chosen temporal reference point) and b construct (growth in X per unit…

  16. A Spreadsheet for a 2 x 3 x 2 Log-Linear Analysis. AIR 1991 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Saupe, Joe L.

    This paper describes a personal computer spreadsheet set up to carry out hierarchical log-linear analyses, a type of analysis useful for institutional research into multidimensional frequency tables formed from categorical variables such as faculty rank, student class level, gender, or retention status. The spreadsheet provides a concrete vehicle…

  17. Factors Influencing M.S.W. Students' Interest in Clinical Practice

    ERIC Educational Resources Information Center

    Perry, Robin

    2009-01-01

    This study utilizes linear and log-linear stochastic models to examine the impact that a variety of variables (including graduate education) have on M.S.W. students' desires to work in clinical practice. Data were collected biannually (between 1992 and 1998) from a complete population sample of all students entering and exiting accredited graduate…

  18. Integrating real-time and manual monitored data to predict hillslope soil moisture dynamics with high spatio-temporal resolution using linear and non-linear models

    USDA-ARS?s Scientific Manuscript database

    Spatio-temporal variability of soil moisture (θ) is a challenge that remains to be better understood. A trade-off exists between spatial coverage and temporal resolution when using the manual and real-time θ monitoring methods. This restricts the comprehensive and intensive examination of θ dynamic...

  19. Double Linear Damage Rule for Fatigue Analysis

    NASA Technical Reports Server (NTRS)

    Halford, G.; Manson, S.

    1985-01-01

    Double Linear Damage Rule (DLDR) method for use by structural designers to determine fatigue-crack-initiation life when a structure is subjected to unsteady, variable-amplitude cyclic loadings. The method calculates, in advance of service, how many loading cycles can be imposed on a structural component before a macroscopic crack initiates. The approach may eventually be used in design of high-performance systems and incorporated into design handbooks and codes.
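    The DLDR applies a linear damage summation separately to the crack-initiation and crack-propagation phases. For contrast, here is the single-phase linear (Miner) rule it refines, with hypothetical cycle counts:

```python
# Linear (Miner) damage accumulation for variable-amplitude loading.
# The DLDR applies this summation twice, once per phase; this sketch shows
# the single-phase rule with hypothetical cycle counts at two load levels.
cycles_applied = [1e4, 5e3]      # n_i applied at each load level
cycles_to_failure = [1e5, 2e4]   # N_i to failure at each load level

damage = sum(n / N for n, N in zip(cycles_applied, cycles_to_failure))
failed = damage >= 1.0  # failure predicted when summed damage reaches unity
```

Here damage = 0.1 + 0.25 = 0.35, so no failure is predicted; the DLDR's refinement is that the cycle budget is split between an initiation phase and a propagation phase, each with its own linear sum.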

  20. On Rank and Nullity

    ERIC Educational Resources Information Center

    Dobbs, David E.

    2012-01-01

    This note explains how Emil Artin's proof that row rank equals column rank for a matrix with entries in a field leads naturally to the formula for the nullity of a matrix and also to an algorithm for solving any system of linear equations in any number of variables. This material could be used in any course on matrix theory or linear algebra.
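    The rank-nullity relationship described above is easy to check numerically; a minimal sketch:

```python
import numpy as np

# A 2x3 matrix whose second row is twice the first, so row rank = 1.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

rank = np.linalg.matrix_rank(A)   # row rank = column rank
nullity = A.shape[1] - rank       # rank-nullity: nullity = n - rank
```

With 3 variables and rank 1, the solution set of A x = 0 is a 2-dimensional subspace, matching the computed nullity.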

  1. Using complexity metrics with R-R intervals and BPM heart rate measures.

    PubMed

    Wallot, Sebastian; Fusaroli, Riccardo; Tylén, Kristian; Jegindø, Else-Marie

    2013-01-01

    Lately, growing attention in the health sciences has been paid to the dynamics of heart rate as an indicator of impending failures and for prognoses. Likewise, in the social and cognitive sciences, heart rate is increasingly employed as a measure of arousal, emotional engagement and as a marker of interpersonal coordination. However, there is no consensus about which measurements and analytical tools are most appropriate in mapping the temporal dynamics of heart rate, and quite different metrics are reported in the literature. As complexity metrics of heart rate variability depend critically on variability of the data, different choices regarding the kind of measures can have a substantial impact on the results. In this article we compare linear and non-linear statistics on two prominent types of heart beat data, beat-to-beat intervals (R-R intervals) and beats-per-minute (BPM). As a proof-of-concept, we employ a simple rest-exercise-rest task and show that non-linear statistics, namely fractal (DFA) and recurrence (RQA) analyses, reveal information about heart beat activity above and beyond the simple level of heart rate. Non-linear statistics unveil sustained post-exercise effects on heart rate dynamics, but their power to do so critically depends on the type of data that is employed: while R-R intervals are very amenable to non-linear analyses, the success of non-linear methods for BPM data critically depends on their construction. Generally, "oversampled" BPM time-series can be recommended, as they retain most of the information about non-linear aspects of heart beat dynamics.
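    As a minimal sketch of one of the non-linear statistics named above, the following implements first-order detrended fluctuation analysis (DFA); the input is synthetic white noise, for which the scaling exponent should come out near 0.5, not a value representative of real heart beat data:

```python
import numpy as np

def dfa_alpha(x, scales):
    """First-order DFA: slope of log F(s) vs log s over the given window sizes."""
    y = np.cumsum(x - np.mean(x))  # integrated (profile) series
    fluct = []
    for s in scales:
        n_win = len(y) // s
        resid = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrend
            resid.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(resid)))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(42)
alpha = dfa_alpha(rng.standard_normal(2000), [4, 8, 16, 32, 64, 128])
```

For white noise alpha ≈ 0.5, for 1/f noise alpha ≈ 1; applied to R-R interval series, shifts in alpha are the kind of post-exercise effect the study reports.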

  3. Linear ketenimines. Variable structures of C,C-dicyanoketenimines and C,C-bis-sulfonylketenimines.

    PubMed

    Finnerty, Justin; Mitschke, Ullrich; Wentrup, Curt

    2002-02-22

    C,C-dicyanoketenimines 10a-c were generated by flash vacuum thermolysis of ketene N,S-acetals 9a-c or by thermal or photochemical decomposition of alpha-azido-beta-cyanocinnamonitrile 11. In the latter reaction, 3,3-dicyano-2-phenyl-1-azirine 12 is also formed. IR spectroscopy of the ketenimines isolated in Ar matrixes or as neat films, NMR spectroscopy of 10c, and theoretical calculations (B3LYP/6-31G) demonstrate that these ketenimines have variable geometry, being essentially linear along the CCN-R framework in polar media (neat films and solution), but in the gas phase or Ar matrix they are bent, as is usual for ketenimines. Experiments and calculations agree that a single CN substituent as in 13 is not enough to enforce linearity, and sulfonyl groups are less effective than cyano groups in causing linearity. C,C-bis(methylsulfonyl)ketenimines 4-5 and a C-cyano-C-(methylsulfonyl)ketenimine 15 are not linear. The compound p-O2NC6H4N=C=C(COOMe)2 previously reported in the literature is probably somewhat linearized along the CCNR moiety. A computational survey (B3LYP/6-31G) of the inversion barrier at nitrogen indicates that electronegative C-substituents dramatically lower the barrier; this is also true of N-acyl substituents. Increasing polarity causes lower barriers. Although N-alkylbis(methylsulfonyl)ketenimines are not calculated to be linear, the barriers are so low that crystal lattice forces can induce planarity in N-methylbis(methylsulfonyl)ketenimine 3.

  4. Study of Environmental Data Complexity using Extreme Learning Machine

    NASA Astrophysics Data System (ADS)

    Leuenberger, Michael; Kanevski, Mikhail

    2017-04-01

    The main goals of environmental data science using machine learning algorithms deal, in a broad sense, with the calibration, the prediction and the visualization of hidden relationships between input and output variables. In order to optimize the models and to understand the phenomenon under study, the characterization of the complexity (at different levels) should be taken into account. Therefore, the identification of the linear or non-linear behavior between input and output variables adds valuable information for the knowledge of the phenomenon complexity. The present research highlights and investigates the different issues that can occur when identifying the complexity (linear/non-linear) of environmental data using machine learning algorithms. In particular, the main attention is paid to the description of a self-consistent methodology for the use of Extreme Learning Machines (ELM, Huang et al., 2006), which recently gained great popularity. By applying two ELM models (with linear and non-linear activation functions) and comparing their efficiency, the degree of linearity can be quantified. The considered approach is accompanied by simulated and real high dimensional and multivariate data case studies. In conclusion, the current challenges and future developments in complexity quantification using environmental data mining are discussed. References - Huang, G.-B., Zhu, Q.-Y., Siew, C.-K., 2006. Extreme learning machine: theory and applications. Neurocomputing 70 (1-3), 489-501. - Kanevski, M., Pozdnoukhov, A., Timonin, V., 2009. Machine Learning for Spatial Environmental Data. EPFL Press, Lausanne, Switzerland, p. 392. - Leuenberger, M., Kanevski, M., 2015. Extreme Learning Machines for spatial environmental data. Computers and Geosciences 85, 64-73.
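    The linear-vs-non-linear ELM comparison described above can be sketched as follows; the target function, sample size, and hidden-layer width are hypothetical choices for illustration, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0])  # a deliberately nonlinear target

def elm_error(X, y, activation, n_hidden=50):
    # ELM: random hidden layer, least-squares output weights (Huang et al., 2006).
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = activation(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return float(np.mean((H @ beta - y) ** 2))

err_tanh = elm_error(X, y, np.tanh)        # non-linear activation
err_linear = elm_error(X, y, lambda z: z)  # linear activation
```

A large gap between `err_linear` and `err_tanh` signals non-linear input-output behavior; near-equal errors would indicate an essentially linear phenomenon.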

  5. The effect of changes in sea surface temperature on linear growth of Porites coral in Ambon Bay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corvianawatie, Corry, E-mail: corvianawatie@students.itb.ac.id; Putri, Mutiara R., E-mail: mutiara.putri@fitb.itb.ac.id; Cahyarini, Sri Y., E-mail: yuda@geotek.lipi.go.id

    Coral is one of the most important organisms in the coral reef ecosystem. There are several factors affecting coral growth, one of them being changes in sea surface temperature (SST). The purpose of this research is to understand the influence of SST variability on the annual linear growth of Porites coral taken from Ambon Bay. The annual coral linear growth was calculated and compared to the annual SST from the Extended Reconstructed Sea Surface Temperature version 3b (ERSST v3b) model. Coral growth was calculated by using Coral X-radiograph Density System (CoralXDS) software. Coral sample X-radiographs were used as input data. Chronology was developed by counting the coral's annual growth bands. A pair of high and low density banding patterns observed in the coral's X-radiograph represents one year of coral growth. The results of this study show that the Porites coral record extends from 2001-2009 with an average growth rate of 1.46 cm/year. Statistical analysis shows that the annual coral linear growth declined by 0.015 cm/year while the annual SST declined by 0.013°C/year. SST and the annual linear growth of Porites coral in the Ambon Bay are not significantly correlated, with r=0.304 (n=9, p>0.05). This indicates that annual SST variability does not significantly influence the linear growth of Porites coral from Ambon Bay. It is suggested that sedimentation load, salinity, pH or other environmental factors may affect annual linear coral growth.
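    The reported non-significance of r=0.304 with n=9 can be checked with the usual t-statistic for a Pearson correlation (a standard test, not necessarily the exact procedure the authors used):

```python
import math

r, n = 0.304, 9

# t-statistic for testing H0: rho = 0, with n - 2 degrees of freedom.
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)

# Two-tailed critical value at alpha = 0.05 with df = 7 is about 2.365.
significant = abs(t) > 2.365
```

With only 9 annual pairs, |t| ≈ 0.84 falls well short of the critical value, consistent with the abstract's p > 0.05 conclusion.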

  6. Computational and Experimental Study of Energetic Materials in a Counterflow Microgravity Environment

    NASA Technical Reports Server (NTRS)

    Takahashi, Fumiaki (Technical Monitor); Urban, David (Technical Monitor); Smooke, M. D.; Parr, T. P.; Hanson-Parr, D. M.; Yetter, R. A.; Risha, G.

    2004-01-01

    Counterflow diffusion flames are studied for various fuels flowing against decomposition products from solid ammonium perchlorate (AP) pellets in order to obtain fundamental understanding of composite propellant flame structure and chemistry. We illustrate this approach through a combined experimental and numerical study of fuel mixtures consisting of C2H4, CO + H2, and C2H2 + C2H4 flowing against solid AP. For these particular AP-fuel systems, the resulting flame zone simulates the various flame structures that are expected to exist between reaction products from AP crystals and a hydrocarbon binder. As in all our experimental studies, quantitative species and temperature profiles have been measured between the fuel exit and the AP surface. Species measured included CN, NH, NO, OH, N2, CO2, CO, H2, HCl, and H2O. Temperature was measured using a thermocouple at the exit, spontaneous Raman scattering measurements throughout the flame, OH rotational population distributions, and NO vibrational population distributions. The burning rate of AP was also measured as a function of strain rate, set by the separation distance between the AP surface and the gaseous hydrocarbon fuel tube exit plane. This distance was nominally set at 5 mm, although studies have been performed with variations in separation distance. The 12 measured scalars are compared with predictions from a detailed gas-phase kinetics model consisting of 86 species and 531 reactions. Model predictions are found to be in good agreement with experiment and illustrate the type of kinetic features that may be expected to occur in propellants when AP particle size distributions are varied. Furthermore, the results constitute the continued development of a necessary database and validation of a comprehensive model for studying more complex AP-solid fuel systems in microgravity. Exploratory studies have also been performed with liquid and solid fuels at normal gravity. Because of melting (and hence dripping) and deep thermal wave penetration into the liquid, these experiments were found feasible but were not used for obtaining quantitative data. Microgravity experiments are needed to eliminate the dripping and boiling phenomena of these systems at normal gravity. Microgravity tests in the NASA Glenn 2.2-second drop tower were performed (1) to demonstrate the feasibility of performing propellant experiments using the NASA Glenn microgravity facilities, (2) to develop the operational procedures for safe handling of the energetic materials and disposal of their toxic combustion by-products, and (3) to obtain initial measurements of the AP burning rate and flame structure under microgravity conditions. Experiments were conducted on the CH4/AP system previously studied at normal gravity, using a modified design of the counterflow burner and a NASA Glenn Pig Rig, i.e., one of the existing drop rigs for general-purpose usage. In these experiments, the AP burning rate was measured directly with a linear variable differential transducer (LVDT), and video imaging of the flame structure was recorded; ignition was achieved by hot wires stretched across the AP surfaces. Initial drop tower combustion data show that with the same burner separation distance and flow conditions as the normal gravity experiments, the AP burning rate is approximately a factor of two lower. This difference is likely a result of radiation effects, but further tests with longer test times need to be conducted to verify that steady state conditions were achieved under microgravity conditions.
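    The LVDT burning-rate measurement mentioned above reduces to a linear voltage-to-position conversion followed by differencing surface positions over time; the calibration constants, voltages, and interval below are hypothetical illustration values:

```python
# LVDT readout: output voltage varies (ideally) linearly with core position.
sensitivity = 250.0  # mV per mm, assumed from a two-point calibration
offset_mv = 12.0     # residual voltage at the null position, assumed

def lvdt_position_mm(v_out_mv):
    """Convert LVDT output voltage (mV) to core displacement (mm)."""
    return (v_out_mv - offset_mv) / sensitivity

# Burning rate from the surface regression over a time interval.
dt = 2.0  # seconds between the two readings
rate_mm_per_s = (lvdt_position_mm(512.0) - lvdt_position_mm(262.0)) / dt
```

In practice the calibration itself is temperature-dependent, which is exactly what the LVDT calibration apparatus in record 1 of this collection addresses.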

  7. Linear dynamical modes as new variables for data-driven ENSO forecast

    NASA Astrophysics Data System (ADS)

    Gavrilov, Andrey; Seleznev, Aleksei; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander; Kurths, Juergen

    2018-05-01

    A new data-driven model for analysis and prediction of spatially distributed time series is proposed. The model is based on a linear dynamical mode (LDM) decomposition of the observed data which is derived from a recently developed nonlinear dimensionality reduction approach. The key point of this approach is its ability to take into account simple dynamical properties of the observed system by means of revealing the system's dominant time scales. The LDMs are used as new variables for empirical construction of a nonlinear stochastic evolution operator. The method is applied to the sea surface temperature anomaly field in the tropical belt where the El Nino Southern Oscillation (ENSO) is the main mode of variability. The advantage of LDMs versus traditionally used empirical orthogonal function decomposition is demonstrated for this data. Specifically, it is shown that the new model has a competitive ENSO forecast skill in comparison with the other existing ENSO models.
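    The traditional EOF decomposition against which the LDMs are compared is a principal-component decomposition of the space-time field, obtainable via SVD; the field below is random placeholder data standing in for SST anomalies:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder anomaly field: 500 time steps x 40 grid points.
field = rng.normal(size=(500, 40))

# EOF decomposition of the centered field via SVD.
anom = field - field.mean(axis=0)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

eofs = Vt                        # spatial patterns (EOFs)
pcs = U * s                      # principal-component time series
explained = s**2 / np.sum(s**2)  # fraction of variance per mode
```

The LDM approach replaces these purely variance-ranked spatial modes with modes chosen to capture the system's dominant time scales, which is the claimed advantage for ENSO forecasting.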

  8. Probabilistic finite elements for transient analysis in nonlinear continua

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Mani, A.

    1985-01-01

    The probabilistic finite element method (PFEM), which is a combination of finite element methods and second-moment analysis, is formulated for linear and nonlinear continua with inhomogeneous random fields. Analogous to the discretization of the displacement field in finite element methods, the random field is also discretized. The formulation is simplified by transforming the correlated variables to a set of uncorrelated variables through an eigenvalue orthogonalization. Furthermore, it is shown that a reduced set of the uncorrelated variables is sufficient for the second-moment analysis. Based on the linear formulation of the PFEM, the method is then extended to transient analysis in nonlinear continua. The accuracy and efficiency of the method is demonstrated by application to a one-dimensional, elastic/plastic wave propagation problem. The moments calculated compare favorably with those obtained by Monte Carlo simulation. Also, the procedure is amenable to implementation in deterministic FEM based computer programs.
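    The eigenvalue-orthogonalization step described above decorrelates the random variables through the eigendecomposition of their covariance; a minimal sketch with a hypothetical 2x2 covariance, verified by sampling:

```python
import numpy as np

# Hypothetical covariance of two correlated random field variables.
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])

# Eigenvalue orthogonalization: C = Phi diag(lam) Phi^T.
lam, phi = np.linalg.eigh(C)

# y = Phi^T x has uncorrelated components with variances lam.
rng = np.random.default_rng(0)
x = rng.multivariate_normal([0.0, 0.0], C, size=20000)
y = x @ phi
cov_y = np.cov(y.T)
```

Truncating to the components with the largest eigenvalues gives the "reduced set of uncorrelated variables" the abstract says suffices for second-moment analysis.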

  9. Systems, methods, and software for determining spatially variable distributions of the dielectric properties of a heterogeneous material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrington, Stephen P.

    Systems, methods, and software for measuring the spatially variable relative dielectric permittivity of materials along a linear or otherwise configured sensor element, and more specifically the spatial variability of soil moisture in one dimension as inferred from the dielectric profile of the soil matrix surrounding a linear sensor element. Various methods provided herein combine advances in the processing of time domain reflectometry data with innovations in physical sensing apparatuses. These advancements enable high temporal (and thus spatial) resolution of electrical reflectance continuously along an insulated waveguide that is permanently emplaced in contact with adjacent soils. The spatially resolved reflectance is directly related to impedance changes along the waveguide that are dominated by electrical permittivity contrast due to variations in soil moisture. Various methods described herein are thus able to monitor soil moisture in profile with high spatial resolution.

  10. Linear solvation energy relationships (LSER): 'rules of thumb' for Vi/100, π*, βm, and αm estimation and use in aquatic toxicology

    USGS Publications Warehouse

    Hickey, James P.

    1996-01-01

    This chapter provides a listing of the increasing variety of organic moieties and heteroatom groups for which Linear Solvation Energy Relationship (LSER) values are available, along with the LSER variable estimation rules. The listings include values for typical nitrogen-, sulfur-, and phosphorus-containing moieties, and general organosilicon and organotin groups. Contributions from an ion-pair situation to the LSER values are also offered in Table 1, allowing estimation of parameters for salts and zwitterions. The guidelines permit quick estimation of values for the four primary LSER variables Vi/100, π*, βm, and αm for a compound by summing the contributions from its components. The use of these guidelines and Table 1 significantly simplifies computation of values for the LSER variables for most organic compounds in the environment, including the larger compounds of environmental and biological interest.
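    The group-contribution summation the guidelines describe can be sketched as a table lookup and sum; the numerical values below are illustrative placeholders, not Hickey's tabulated contributions:

```python
# Hypothetical group contributions to the four LSER variables.
# (Placeholder values for illustration only; not the chapter's Table 1.)
groups = {
    "CH3": {"Vi100": 0.031, "pi*": -0.04, "beta_m": 0.00, "alpha_m": 0.00},
    "OH":  {"Vi100": 0.007, "pi*": 0.40, "beta_m": 0.47, "alpha_m": 0.33},
}

molecule = ["CH3", "OH"]  # e.g., methanol decomposed into its component groups

# Estimate each LSER variable by summing the contributions of the groups.
lser = {var: sum(groups[g][var] for g in molecule)
        for var in ["Vi100", "pi*", "beta_m", "alpha_m"]}
```

The same lookup-and-sum pattern extends to the ion-pair corrections for salts and zwitterions mentioned above.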

  11. A variable structure approach to robust control of VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Kramer, F.

    1982-01-01

    This paper examines the application of variable structure control theory to the design of a flight control system for the AV-8A Harrier in a hover mode. The objective in variable structure design is to confine the motion to a subspace of the total state space. The motion in this subspace is insensitive to system parameter variations and external disturbances that lie in the range space of the control. A switching type of control law results from the design procedure. The control system was designed to track a vector velocity command defined in the body frame. For comparison purposes, a proportional controller was designed using optimal linear regulator theory. Both control designs were first evaluated for transient response performance using a linearized model, then a nonlinear simulation study of a hovering approach to landing was conducted. Wind turbulence was modeled using a 1052 destroyer class air wake model.

  12. Spatial generalised linear mixed models based on distances.

    PubMed

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and a useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture among them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with maximum normalised-difference vegetation index and the standard deviation of normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  13. An Integrated Method to Analyze Farm Vulnerability to Climatic and Economic Variability According to Farm Configurations and Farmers' Adaptations.

    PubMed

    Martin, Guillaume; Magne, Marie-Angélina; Cristobal, Magali San

    2017-01-01

    The need to adapt to decrease farm vulnerability to adverse contextual events has been extensively discussed on a theoretical basis. We developed an integrated and operational method to assess farm vulnerability to multiple and interacting contextual changes and explain how this vulnerability can best be reduced according to farm configurations and farmers' technical adaptations over time. Our method considers farm vulnerability as a function of the raw measurements of vulnerability variables (e.g., economic efficiency of production), the slope of the linear regression of these measurements over time, and the residuals of this linear regression. The last two are extracted from linear mixed models considering a random regression coefficient (an intercept common to all farms), a global trend (a slope common to all farms), a random deviation from the general mean for each farm, and a random deviation from the general trend for each farm. Among all possible combinations, the lowest farm vulnerability is obtained through a combination of high values of measurements, a stable or increasing trend and low variability for all vulnerability variables considered. Our method enables relating the measurements, trends and residuals of vulnerability variables to explanatory variables that illustrate farm exposure to climatic and economic variability, initial farm configurations and farmers' technical adaptations over time. We applied our method to 19 cattle (beef, dairy, and mixed) farms over the period 2008-2013. Selected vulnerability variables, i.e., farm productivity and economic efficiency, varied greatly among cattle farms and across years, with means ranging from 43.0 to 270.0 kg protein/ha and 29.4-66.0% efficiency, respectively. No farm had a high level, stable or increasing trend and low residuals for both farm productivity and economic efficiency of production. 
Thus, the least vulnerable farms represented a compromise among measurement value, trend, and variability of both performances. No specific combination of farmers' practices emerged for reducing cattle farm vulnerability to climatic and economic variability. In the least vulnerable farms, the practices implemented (stocking rate, input use…) were more consistent with the objective of developing the properties targeted (efficiency, robustness…). Our method can be used to support farmers with sector-specific and local insights about most promising farm adaptations.
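
The decomposition into level, trend, and residual variability can be illustrated with a simplified sketch that fits a per-farm ordinary least-squares trend instead of the full random-coefficient mixed model described above; all numbers are invented for illustration:

```python
import numpy as np

def vulnerability_components(years, values):
    """Decompose a vulnerability variable into its mean level, its linear
    trend over time (slope), and the spread of residuals around the trend.
    Simplified per-farm OLS stand-in for the mixed-model decomposition."""
    slope, intercept = np.polyfit(years, values, 1)
    residuals = values - (slope * years + intercept)
    return {"mean": values.mean(), "trend": slope,
            "residual_sd": residuals.std()}

# Hypothetical economic-efficiency series (%) for one farm, 2008-2013:
years = np.arange(2008, 2014, dtype=float)
efficiency = np.array([45.0, 47.0, 44.0, 49.0, 50.0, 52.0])
print(vulnerability_components(years, efficiency))
```

Low vulnerability then corresponds to a high mean, a non-negative trend, and a small residual standard deviation.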

  15. Latent log-linear models for handwritten digit classification.

    PubMed

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann

    2012-06-01

    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.

  16. Combining information from 3 anatomic regions in the diagnosis of glaucoma with time-domain optical coherence tomography.

    PubMed

    Wang, Mingwu; Lu, Ake Tzu-Hui; Varma, Rohit; Schuman, Joel S; Greenfield, David S; Huang, David

    2014-03-01

    To improve the diagnosis of glaucoma by combining time-domain optical coherence tomography (TD-OCT) measurements of the optic disc, circumpapillary retinal nerve fiber layer (RNFL), and macular retinal thickness. Ninety-six age-matched normal and 96 perimetric glaucoma participants were included in this observational, cross-sectional study. Or-logic, support vector machine, relevance vector machine, and linear discrimination function were used to analyze the performances of combined TD-OCT diagnostic variables. The area under the receiver operating characteristic curve (AROC) was used to evaluate the diagnostic accuracy and to compare the diagnostic performance of single and combined anatomic variables. The best RNFL thickness variables were the inferior (AROC=0.900), overall (AROC=0.892), and superior quadrants (AROC=0.850). The best optic disc variables were horizontal integrated rim width (AROC=0.909), vertical integrated rim area (AROC=0.908), and cup/disc vertical ratio (AROC=0.890). All macular retinal thickness variables had AROCs of 0.829 or less. Combining the top 3 RNFL and optic disc variables in optimizing glaucoma diagnosis, the support vector machine had the highest AROC, 0.954, followed by or-logic (AROC=0.946), linear discrimination function (AROC=0.946), and relevance vector machine (AROC=0.943). All combination diagnostic variables had significantly larger AROCs than any single diagnostic variable. There were no significant differences among the combination diagnostic indices. With TD-OCT, RNFL and optic disc variables had better diagnostic accuracy than macular retinal variables. Combining top RNFL and optic disc variables significantly improved diagnostic performance. Clinically, or-logic classification was the most practical analytical tool with sufficient accuracy to diagnose early glaucoma.
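
The AROC comparison can be sketched as follows. The rank-sum (Mann-Whitney) identity below is a standard way to compute the area under the ROC curve; the RNFL thickness numbers are hypothetical, not the study's data:

```python
import numpy as np

def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity.
    labels: 0/1 array; scores: higher = more likely diseased."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
# Illustrative data: thinner RNFL in glaucoma (hypothetical distributions).
labels = np.r_[np.zeros(96), np.ones(96)].astype(int)
rnfl = np.r_[rng.normal(100, 10, 96), rng.normal(80, 12, 96)]
print(round(auroc(labels, -rnfl), 3))  # negate: thinner -> higher score
```

Combining variables (e.g., taking the maximum abnormality score across RNFL and disc measures, as in or-logic) would then be evaluated with the same AROC function.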

  17. Vagal-dependent nonlinear variability in the respiratory pattern of anesthetized, spontaneously breathing rats

    PubMed Central

    Dhingra, R. R.; Jacono, F. J.; Fishman, M.; Loparo, K. A.; Rybak, I. A.

    2011-01-01

    Physiological rhythms, including respiration, exhibit endogenous variability associated with health, and deviations from this are associated with disease. Specific changes in the linear and nonlinear sources of breathing variability have not been investigated. In this study, we used information theory-based techniques, combined with surrogate data testing, to quantify and characterize the vagal-dependent nonlinear pattern variability in urethane-anesthetized, spontaneously breathing adult rats. Surrogate data sets preserved the amplitude distribution and linear correlations of the original data set, but nonlinear correlation structure in the data was removed. Differences in mutual information and sample entropy between original and surrogate data sets indicated the presence of deterministic nonlinear or stochastic non-Gaussian variability. With vagi intact (n = 11), the respiratory cycle exhibited significant nonlinear behavior in templates of points separated by time delays ranging from one sample to one cycle length. After vagotomy (n = 6), even though nonlinear variability was reduced significantly, nonlinear properties were still evident at various time delays. Nonlinear deterministic variability did not change further after subsequent bilateral microinjection of MK-801, an N-methyl-d-aspartate receptor antagonist, in the Kölliker-Fuse nuclei. Reversing the sequence (n = 5), blocking N-methyl-d-aspartate receptors bilaterally in the dorsolateral pons significantly decreased nonlinear variability in the respiratory pattern, even with the vagi intact, and subsequent vagotomy did not change nonlinear variability. Thus both vagal and dorsolateral pontine influences contribute to nonlinear respiratory pattern variability. Furthermore, breathing dynamics of the intact system are mutually dependent on vagal and pontine sources of nonlinear complexity. 
Understanding the structure and modulation of variability provides insight into disease effects on respiratory patterning. PMID:21527661
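
The surrogate-testing idea can be sketched with a simple phase-randomized (FT) surrogate, which preserves the power spectrum (linear correlations) while destroying nonlinear structure; the amplitude-adjusted variant used in studies like this adds a rank-remapping step not shown here:

```python
import numpy as np

def ft_surrogate(x, rng):
    """Phase-randomized surrogate: keeps the amplitude spectrum (and hence
    the linear autocorrelation) but scrambles the Fourier phases, removing
    any nonlinear correlation structure."""
    n = len(x)
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spectrum))
    phases[0] = 0.0                      # keep the mean
    if n % 2 == 0:
        phases[-1] = 0.0                 # Nyquist bin must stay real
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n)

rng = np.random.default_rng(1)
# Hypothetical "respiratory" trace: sinusoid plus noise.
x = np.sin(np.linspace(0, 20 * np.pi, 1024)) + 0.1 * rng.standard_normal(1024)
s = ft_surrogate(x, rng)
# Power spectra agree even though the waveforms differ:
print(np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(s))))
```

A difference in sample entropy or mutual information between the original series and an ensemble of such surrogates is then evidence of nonlinear (or non-Gaussian) variability.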

  18. The effect of virtual reality on gait variability.

    PubMed

    Katsavelis, Dimitrios; Mukherjee, Mukul; Decker, Leslie; Stergiou, Nicholas

    2010-07-01

    Optic flow (OF) plays an important role in human locomotion, and manipulation of OF characteristics can cause changes in locomotion patterns. The purpose of the study was to investigate the effect of the velocity of optic flow on the amount and structure of gait variability. Each subject underwent four conditions of treadmill walking at their self-selected pace. In three conditions the subjects walked in an endless virtual corridor, while a fourth control condition was also included. The three virtual conditions differed in the speed of the optic flow displayed: the same speed as the treadmill (OFn), faster (OFf), and slower (OFs). Gait kinematics were tracked with an optical motion capture system. Gait variability measures of the hip, knee and ankle range of motion and of the stride interval were analyzed. The amount of variability was evaluated with a linear measure, the coefficient of variation (CV), while the structure of variability, i.e., its organization over time, was measured with nonlinear measures: approximate entropy and detrended fluctuation analysis. The linear measure, CV, did not show significant differences between non-VR and VR conditions, while the nonlinear measures identified significant differences at the hip, ankle, and in stride interval. In response to manipulation of the optic flow, significant differences were observed between the three virtual conditions in the following order: OFn greater than OFf greater than OFs. Measures of the structure of variability are more sensitive to changes in gait due to manipulation of visual cues, whereas measures of the amount of variability may be concealed by adaptive mechanisms. Visual cues increase the complexity of gait variability and may increase the degrees of freedom available to the subject. Further exploration of the effects of optic flow manipulation on locomotion may provide us with an effective tool for rehabilitation of subjects with sensorimotor issues.
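
The two families of variability measures can be sketched as follows; the sample-entropy implementation below is a minimal stand-in for the approximate entropy used in such studies, and the stride data are hypothetical:

```python
import numpy as np

def coefficient_of_variation(x):
    """Linear measure: relative spread, blind to temporal ordering."""
    return np.std(x) / np.mean(x)

def sample_entropy(x, m=2, r=0.2):
    """Nonlinear measure: irregularity of the sequence (lower = more
    regular). Tolerance r is in units of the series' standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    def matches(mm):
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=-1)
        return (d <= tol).sum() - len(t)   # drop self-matches
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(4)
t = np.linspace(0, 8 * np.pi, 300)
regular = np.sin(t)                        # highly regular pattern
irregular = rng.standard_normal(300)       # uncorrelated noise
print(sample_entropy(regular) < sample_entropy(irregular))

strides = rng.normal(1.10, 0.05, 50)       # hypothetical stride times (s)
print(round(coefficient_of_variation(strides), 3))
```

The CV of the two series can be similar while their entropies differ sharply, which is exactly the dissociation the abstract reports.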

  19. Normal forms for reduced stochastic climate models

    PubMed Central

    Majda, Andrew J.; Franzke, Christian; Crommelin, Daan

    2009-01-01

    The systematic development of reduced low-dimensional stochastic climate models from observations or comprehensive high-dimensional climate models is an important topic for atmospheric low-frequency variability, climate sensitivity, and improved extended range forecasting. Here techniques from applied mathematics are utilized to systematically derive normal forms for reduced stochastic climate models for low-frequency variables. The use of a few Empirical Orthogonal Functions (EOFs) (also known as Principal Component Analysis, Karhunen–Loève and Proper Orthogonal Decomposition) depending on observational data to span the low-frequency subspace requires the assessment of dyad interactions besides the more familiar triads in the interaction between the low- and high-frequency subspaces of the dynamics. It is shown below that the dyad and multiplicative triad interactions combine with the climatological linear operator interactions to simultaneously produce both strong nonlinear dissipation and Correlated Additive and Multiplicative (CAM) stochastic noise. For a single low-frequency variable the dyad interactions and climatological linear operator alone produce a normal form with CAM noise from advection of the large scales by the small scales and simultaneously strong cubic damping. These normal forms should prove useful for developing systematic strategies for the estimation of stochastic models from climate data. As an illustrative example the one-dimensional normal form is applied below to low-frequency patterns such as the North Atlantic Oscillation (NAO) in a climate model. The results here also illustrate the shortcomings of a recent linear scalar CAM noise model proposed elsewhere for low-frequency variability. PMID:19228943

  20. Decomposition and model selection for large contingency tables.

    PubMed

    Dahinden, Corinne; Kalisch, Markus; Bühlmann, Peter

    2010-04-01

    Large contingency tables summarizing categorical variables arise in many areas. One example is in biology, where large numbers of biomarkers are cross-tabulated according to their discrete expression level. Interactions of the variables are of great interest and are generally studied with log-linear models. The structure of a log-linear model can be visually represented by a graph from which the conditional independence structure can then be easily read off. However, since the number of parameters in a saturated model grows exponentially in the number of variables, this generally comes with a heavy computational burden. Even if we restrict ourselves to models of lower-order interactions or other sparse structures, we are faced with the problem of a large number of cells which play the role of sample size. This is in sharp contrast to high-dimensional regression or classification procedures because, in addition to a high-dimensional parameter, we also have to deal with the analogue of a huge sample size. Furthermore, high-dimensional tables naturally feature a large number of sampling zeros which often leads to the nonexistence of the maximum likelihood estimate. We therefore present a decomposition approach, where we first divide the problem into several lower-dimensional problems and then combine these to form a global solution. Our methodology is computationally feasible for log-linear interaction models with many categorical variables each or some of them having many levels. We demonstrate the proposed method on simulated data and apply it to a bio-medical problem in cancer research.
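
Fitting a log-linear model to a contingency table is classically done by iterative proportional fitting; below is a minimal sketch for the two-way independence model [A][B], far simpler than the high-dimensional decomposition the paper develops:

```python
import numpy as np

def ipf_independence(table, iters=50):
    """Iterative proportional fitting for the independence log-linear
    model [A][B] on a two-way table: alternately rescale the fitted
    values to match the observed row and column margins."""
    fit = np.ones_like(table, dtype=float)
    for _ in range(iters):
        fit *= table.sum(axis=1, keepdims=True) / fit.sum(axis=1, keepdims=True)
        fit *= table.sum(axis=0, keepdims=True) / fit.sum(axis=0, keepdims=True)
    return fit

obs = np.array([[10.0, 20.0],
                [30.0, 40.0]])
# For [A][B] the fit equals the classical independence estimate
# row_total * col_total / grand_total:
expected = np.outer(obs.sum(1), obs.sum(0)) / obs.sum()
print(np.allclose(ipf_independence(obs), expected))
```

For higher-order interaction models the same margin-matching loop runs over each sufficient margin of the model, which is where the exponential growth in cells that the paper addresses begins to bite.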

  1. A linear stepping endovascular intervention robot with variable stiffness and force sensing.

    PubMed

    He, Chengbin; Wang, Shuxin; Zuo, Siyang

    2018-05-01

    Robotic-assisted endovascular intervention surgery has attracted significant attention and interest in recent years. However, few designs have focused on a variable stiffness mechanism for the catheter shaft. A flexible catheter needs to be partially switched to a rigid state that can hold its shape against external force to achieve a stable and effective insertion procedure. Furthermore, driving the catheter in a way similar to manual procedures has the potential to make full use of the extensive experience gained from conventional catheter navigation. Besides the driving method, force sensing is another significant factor in endovascular intervention. This paper presents a variable stiffness catheterization system that provides a stable and accurate endovascular intervention procedure using a linear stepping mechanism whose operation mode is similar to conventional catheter navigation. A specially designed shape-memory polymer tube with a water cooling structure is used to achieve variable stiffness of the catheter. In addition, four FBG sensors are attached to the catheter tip to monitor the tip contact force with temperature compensation. Experimental results show that the actuation unit is able to deliver linear and rotational motions. We have shown the feasibility of FBG force sensing to reduce the effect of temperature and detect the tip contact force. The designed catheter can change its stiffness locally, and the stiffness of the catheter can be remarkably increased in the rigid state, in which the catheter can hold its shape against a [Formula: see text] load. The prototype has also been validated with a vascular phantom, demonstrating the potential clinical value of the system. The proposed system provides important insights into the design of a compact robotic-assisted catheter incorporating an effective variable stiffness mechanism and real-time force sensing for intraoperative endovascular intervention.

  2. Estimating severity of sideways fall using a generic multi linear regression model based on kinematic input variables.

    PubMed

    van der Zijden, A M; Groen, B E; Tanck, E; Nienhuis, B; Verdonschot, N; Weerdesteyn, V

    2017-03-21

    Many research groups have studied fall impact mechanics to understand how fall severity can be reduced to prevent hip fractures. Yet, direct impact force measurements with force plates are restricted to a very limited repertoire of experimental falls. The purpose of this study was to develop a generic model for estimating hip impact forces (i.e. fall severity) in in vivo sideways falls without the use of force plates. Twelve experienced judokas performed sideways Martial Arts (MA) and Block ('natural') falls on a force plate, both with and without a mat on top. Data were analyzed to determine the hip impact force and to derive 11 selected (subject-specific and kinematic) variables. Falls from kneeling height were used to perform a stepwise regression procedure to assess the effects of these input variables and build the model. The final model includes four input variables, involving one subject-specific measure and three kinematic variables: maximum upper body deceleration, body mass, shoulder angle at the instant of 'maximum impact' and maximum hip deceleration. The results showed that estimated and measured hip impact forces were linearly related (explained variances ranging from 46 to 63%). Hip impact forces of MA falls onto the mat from a standing position (3650±916N) estimated by the final model were comparable with measured values (3698±689N), even though these data were not used for training the model. In conclusion, a generic linear regression model was developed that enables the assessment of fall severity through kinematic measures of sideways falls, without using force plates. Copyright © 2017 Elsevier Ltd. All rights reserved.
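
The regression step can be sketched as follows; predictor ranges and coefficients are invented for illustration and do not reproduce the published model:

```python
import numpy as np

# Hypothetical illustration of the 4-variable linear model for hip impact
# force; the published coefficients are NOT reproduced here, only the
# least-squares fitting step is shown.
rng = np.random.default_rng(2)
n = 40
X = np.column_stack([
    rng.uniform(30, 80, n),    # max upper-body deceleration (m/s^2)
    rng.uniform(55, 95, n),    # body mass (kg)
    rng.uniform(0, 90, n),     # shoulder angle at impact (deg)
    rng.uniform(20, 60, n),    # max hip deceleration (m/s^2)
])
true_beta = np.array([20.0, 25.0, -5.0, 30.0])   # invented coefficients
force = X @ true_beta + rng.normal(0, 100, n)    # simulated impact force (N)

A = np.column_stack([np.ones(n), X])             # add intercept column
beta, *_ = np.linalg.lstsq(A, force, rcond=None)
pred = A @ beta
r2 = 1 - ((force - pred) ** 2).sum() / ((force - force.mean()) ** 2).sum()
print(round(r2, 2))
```

In the study, a stepwise procedure selected these four predictors from eleven candidates before fitting the final model.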

  3. Relationship between rice yield and climate variables in southwest Nigeria using multiple linear regression and support vector machine analysis

    NASA Astrophysics Data System (ADS)

    Oguntunde, Philip G.; Lischeid, Gunnar; Dietrich, Ottfried

    2018-03-01

    This study examines the variations of climate variables and rice yield and quantifies the relationships among them using multiple linear regression, principal component analysis, and support vector machine (SVM) analysis in southwest Nigeria. The climate and yield data covered a period of 36 years, from 1980 to 2015. Similar to the observed decrease (P < 0.001) in rice yield, pan evaporation, solar radiation, and wind speed declined significantly. Eight principal components exhibited an eigenvalue > 1 and explained 83.1% of the total variance of the predictor variables. The SVM regression function using the scores of the first principal component explained about 75% of the variance in rice yield data, and linear regression about 64%. SVM regression between annual solar radiation values and yield explained 67% of the variance. Only the first component of the principal component analysis (PCA) exhibited a clear long-term trend and sometimes short-term variance similar to that of rice yield. Short-term fluctuations of the scores of the PC1 are closely coupled to those of rice yield during the 1986-1993 and the 2006-2013 periods, thereby revealing the inter-annual sensitivity of rice production to climate variability. Solar radiation stands out as the climate variable of highest influence on rice yield, and the influence was especially strong during monsoon and post-monsoon periods, which correspond to the vegetative, booting, flowering, and grain filling stages in the study area. The outcome is expected to provide a more in-depth, region-specific climate-rice linkage for screening of better cultivars that can positively respond to future climate fluctuations, as well as information that may help optimize planting dates for improved radiation use efficiency in the study area.
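
The PCA-then-regression pipeline can be sketched as follows, substituting ordinary least squares on the first principal component for the SVM regression used in the paper; all climate and yield numbers are synthetic:

```python
import numpy as np

# Sketch: PCA via SVD on standardized climate predictors, then an OLS fit
# of yield on the first principal component. The paper uses support vector
# regression at this step; plain regression keeps the sketch short.
rng = np.random.default_rng(3)
n = 36                                   # 36 years, as in the study period
radiation = rng.normal(18, 2, n)         # hypothetical MJ/m^2/day values
evap = 0.5 * radiation + rng.normal(0, 0.5, n)   # correlated predictor
wind = rng.normal(2, 0.3, n)
X = np.column_stack([radiation, evap, wind])
Xs = (X - X.mean(0)) / X.std(0)          # standardize before PCA

U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
pc1 = Xs @ Vt[0]                         # scores on the first component

yield_t = 2.0 + 0.3 * radiation + rng.normal(0, 0.3, n)  # t/ha, illustrative
slope, intercept = np.polyfit(pc1, yield_t, 1)
pred = slope * pc1 + intercept
r2 = 1 - ((yield_t - pred) ** 2).sum() / ((yield_t - yield_t.mean()) ** 2).sum()
print(round(r2, 2))
```

Because radiation dominates both PC1 and the synthetic yield, the fit recovers a substantial share of the yield variance, mirroring the qualitative finding of the study.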

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Haixia; Zhang, Jing

    We propose a scheme for continuous-variable quantum cloning of coherent states with phase-conjugate input modes using linear optics. The quantum cloning machine yields M identical optimal clones from N replicas of a coherent state and N replicas of its phase conjugate. This scheme can be straightforwardly implemented with the setups accessible at present since its optical implementation only employs simple linear optical elements and homodyne detection. Compared with the original scheme for continuous-variable quantum cloning with phase-conjugate input modes proposed by Cerf and Iblisdir [Phys. Rev. Lett. 87, 247903 (2001)], which utilized a nondegenerate optical parametric amplifier, our scheme loses the output of phase-conjugate clones and is regarded as irreversible quantum cloning.

  5. Suspension system vibration analysis with regard to variable type ability to smooth road irregularities

    NASA Astrophysics Data System (ADS)

    Rykov, S. P.; Rykova, O. A.; Koval, V. S.; Makhno, D. E.; Fedotov, K. V.

    2018-03-01

    The paper aims to analyze vibrations of the dynamic-system equivalent of the suspension system with regard to the tyre's ability to smooth road irregularities. The analysis draws on the statistical dynamics of linear automatic control systems and on methods of correlation, spectral and numerical analysis. Incorporating new data on the smoothing effect of the pneumatic tyre, which reflects changes in the contact area between the wheel and the road as the suspension vibrates, makes the system non-linear and requires numerical analysis methods. By taking the variable smoothing ability of the tyre into account when calculating suspension vibrations, one can bring calculated and experimental results closer together and improve on the constant-smoothing-ability approximation.

  6. Analytical modeling and tolerance analysis of a linear variable filter for spectral order sorting.

    PubMed

    Ko, Cheng-Hao; Chang, Kuei-Ying; Huang, You-Min

    2015-02-23

    This paper proposes an innovative method to overcome the low production rate of current linear variable filter (LVF) fabrication. During the fabrication process, a commercial coater is combined with a local mask on a substrate. The proposed analytical thin film thickness model, which is based on the geometry of the commercial coater, is developed to more effectively calculate the profiles of LVFs. Thickness tolerance, LVF zone width, thin film layer structure, transmission spectrum and the effects of variations in critical parameters of the coater are analyzed. Profile measurements demonstrate the efficacy of local mask theory in the prediction of evaporation profiles with a high degree of accuracy.

  7. Estimation in Linear Systems Featuring Correlated Uncertain Observations Coming from Multiple Sensors

    NASA Astrophysics Data System (ADS)

    Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.

    2009-08-01

    In this paper, the state least-squares linear estimation problem from correlated uncertain observations coming from multiple sensors is addressed. It is assumed that, at each sensor, the state is measured in the presence of additive white noise and that the uncertainty in the observations is characterized by a set of Bernoulli random variables which are only correlated at consecutive time instants. Assuming that the statistical properties of such variables are not necessarily the same for all the sensors, a recursive filtering algorithm is proposed, and the performance of the estimators is illustrated by a numerical simulation example wherein a signal is estimated from correlated uncertain observations coming from two sensors with different uncertainty characteristics.

  8. Highlights of the LINEAR survey

    NASA Astrophysics Data System (ADS)

    Palaversa, L.

    2014-07-01

    The Lincoln Near-Earth Asteroid Research (LINEAR) asteroid survey observed approximately 10,000 deg² of the northern sky over the period from roughly 1998 to 2013. The long baseline of observations, combined with good cadence and depth (14.5 < rSDSS < 17.5), provides an excellent basis for investigating variable and transient objects in this relatively faint and underexplored part of the sky. Details are presented covering the repurposing of this survey for time domain astronomy, the creation of a highly reliable catalogue of approximately 7,200 periodically variable stars (RR Lyrae, eclipsing binaries, SX Phe stars and LPVs), and the search for optical signatures of exotic transient events, such as tidal disruption event candidates.

  9. Solving the linear inviscid shallow water equations in one dimension, with variable depth, using a recursion formula

    NASA Astrophysics Data System (ADS)

    Hernandez-Walls, R.; Martín-Atienza, B.; Salinas-Matus, M.; Castillo, J.

    2017-11-01

    When solving the linear inviscid shallow water equations with variable depth in one dimension using finite differences, a tridiagonal system of equations must be solved. Here we present an approach, which is more efficient than the commonly used numerical method, to solve this tridiagonal system of equations using a recursion formula. We illustrate this approach with an example in which we solve for a rectangular channel to find the resonance modes. Our numerical solution agrees very well with the analytical solution. This new method is easy for undergraduate students to use and understand, so it can be implemented in undergraduate courses such as Numerical Methods, Linear Algebra or Differential Equations.
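
A recursion of this kind is exemplified by the standard Thomas algorithm for tridiagonal systems (whether it matches the authors' specific formula is an assumption). A minimal sketch:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system by the Thomas recursion: forward
    elimination followed by back substitution.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Diagonally dominant test system resembling a 1-D finite-difference stencil:
n = 50
a = np.r_[0.0, -np.ones(n - 1)]
b = 2.1 * np.ones(n)
c = np.r_[-np.ones(n - 1), 0.0]
d = np.ones(n)
x = thomas(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(A @ x, d))
```

The recursion runs in O(n) time and O(n) memory, versus O(n³) for a dense solve, which is why it is the workhorse for 1-D finite-difference problems like this one.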

  10. On Matrices, Automata, and Double Counting

    NASA Astrophysics Data System (ADS)

    Beldiceanu, Nicolas; Carlsson, Mats; Flener, Pierre; Pearson, Justin

    Matrix models are ubiquitous for constraint problems. Many such problems have a matrix of variables M, with the same constraint defined by a finite-state automaton A on each row of M and a global cardinality constraint gcc on each column of M. We give two methods for deriving, by double counting, necessary conditions on the cardinality variables of the gcc constraints from the automaton A. The first method yields linear necessary conditions and simple arithmetic constraints. The second method introduces the cardinality automaton, which abstracts the overall behaviour of all the row automata and can be encoded by a set of linear constraints. We evaluate the impact of our methods on a large set of nurse rostering problem instances.

  11. Morphometric variability of Arctodiaptomus salinus (Copepoda) in the Mediterranean-Black Sea region

    PubMed Central

    ANUFRIIEVA, Elena V.; SHADRIN, Nickolai V.

    2015-01-01

    Inter-species variability in morphological traits creates a need to know the range of variability of characteristics in the species for taxonomic and ecological tasks. The copepod Arctodiaptomus salinus, which inhabits water bodies across Eurasia and North Africa, plays a dominant role in the plankton of different water bodies, from fresh to hypersaline. This work assesses the intra- and inter-population morphometric variability of A. salinus in the Mediterranean-Black Sea region and discusses some observed regularities. The variability of linear body parameters and proportions was studied. The impacts of salinity, temperature, and population density on morphological characteristics and their variability can manifest themselves in different ways at the intra- and inter-population levels. No significant effect of salinity, pH and temperature on the body proportions was found, although their intra-population variability depends on temperature and salinity. Sexual dimorphism of A. salinus manifests in different linear parameters, proportions, and their variability. There were no effects of temperature, pH and salinity on the female/male parameter ratio. There were significant differences in the body proportions of males and females in different populations. The influence of temperature, salinity, and population density can account for 80%-90% of intra-population variability of A. salinus. However, these factors can explain less than 40% of inter-population differences. Significant differences in the body proportions of males and females from different populations may suggest that some local populations of A. salinus in the Mediterranean-Black Sea region are in the initial stages of differentiation. PMID:26646569

  12. Example-Based Learning: Exploring the Use of Matrices and Problem Variability

    ERIC Educational Resources Information Center

    Hancock-Niemic, Mary A.; Lin, Lijia; Atkinson, Robert K.; Renkl, Alexander; Wittwer, Joerg

    2016-01-01

    The purpose of the study was to investigate the efficacy of using faded worked examples presented in matrices with problem structure variability to enhance learners' ability to recognize the underlying structure of the problems. Specifically, this study compared the effects of matrix-format versus linear-format faded worked examples combined with…

  13. Impact of Preadmission Variables on USMLE Step 1 and Step 2 Performance

    ERIC Educational Resources Information Center

    Kleshinski, James; Khuder, Sadik A.; Shapiro, Joseph I.; Gold, Jeffrey P.

    2009-01-01

    Purpose: To examine the predictive ability of preadmission variables on United States Medical Licensing Examinations (USMLE) step 1 and step 2 performance, incorporating the use of a neural network model. Method: Preadmission data were collected on matriculants from 1998 to 2004. Linear regression analysis was first used to identify predictors of…

  14. The Use of Structure Coefficients to Address Multicollinearity in Sport and Exercise Science

    ERIC Educational Resources Information Center

    Yeatts, Paul E.; Barton, Mitch; Henson, Robin K.; Martin, Scott B.

    2017-01-01

    A common practice in general linear model (GLM) analyses is to interpret regression coefficients (e.g., standardized ß weights) as indicators of variable importance. However, focusing solely on standardized beta weights may provide limited or erroneous information. For example, ß weights become increasingly unreliable when predictor variables are…

  15. Bayesian Model Comparison for the Order Restricted RC Association Model

    ERIC Educational Resources Information Center

    Iliopoulos, G.; Kateri, M.; Ntzoufras, I.

    2009-01-01

    Association models constitute an attractive alternative to the usual log-linear models for modeling the dependence between classification variables. They impose special structure on the underlying association by assigning scores on the levels of each classification variable, which can be fixed or parametric. Under the general row-column (RC)…

  16. Variables Associated with Communicative Participation in People with Multiple Sclerosis: A Regression Analysis

    ERIC Educational Resources Information Center

    Baylor, Carolyn; Yorkston, Kathryn; Bamer, Alyssa; Britton, Deanna; Amtmann, Dagmar

    2010-01-01

    Purpose: To explore variables associated with self-reported communicative participation in a sample (n = 498) of community-dwelling adults with multiple sclerosis (MS). Method: A battery of questionnaires was administered online or on paper per participant preference. Data were analyzed using multiple linear backward stepwise regression. The…

  17. The Role of Schools, Families, and Psychological Variables on Math Achievement of Black High School Students

    ERIC Educational Resources Information Center

    Strayhorn, Terrell L.

    2010-01-01

    Using data from the National Education Longitudinal Study (NELS;1988/2000), the author conducted hierarchical linear regression analyses, with a nested design, to estimate the influence of affective variables--parent involvement, teacher perceptions, and school environments--on Black students' math achievement in grade 10. Drawing on…

  18. Logarithmic Transformations in Regression: Do You Transform Back Correctly?

    ERIC Educational Resources Information Center

    Dambolena, Ismael G.; Eriksen, Steven E.; Kopcso, David P.

    2009-01-01

    The logarithmic transformation is often used in regression analysis for a variety of purposes such as the linearization of a nonlinear relationship between two or more variables. We have noticed that when this transformation is applied to the response variable, the computation of the point estimate of the conditional mean of the original response…
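The pitfall the abstract alludes to is easy to demonstrate: if log(Y) is modeled as Normal(mu, sigma^2), then E[Y] = exp(mu + sigma^2/2), so simply exponentiating the predicted log-mean underestimates the conditional mean of the original response. A small simulation (parameter values are illustrative, not from the article):

```python
import math
import random

random.seed(42)

# Suppose the regression on the log scale gives fitted mean mu and
# residual standard deviation sigma for log(Y).
mu, sigma = 1.0, 0.6

# Simulate the original-scale response Y = exp(log Y).
sample = [math.exp(random.gauss(mu, sigma)) for _ in range(200_000)]
empirical_mean = sum(sample) / len(sample)

naive = math.exp(mu)                       # common but biased back-transform
corrected = math.exp(mu + sigma**2 / 2)    # lognormal mean correction
```

The corrected estimate tracks the empirical mean of Y, while the naive back-transform systematically undershoots it; the gap grows with sigma.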

  19. IN11B-1621: Quantifying How Climate Affects Vegetation in the Amazon Rainforest

    NASA Technical Reports Server (NTRS)

    Das, Kamalika; Kodali, Anuradha; Szubert, Marcin; Ganguly, Sangram; Bongard, Joshua

    2016-01-01

    Amazon droughts in 2005 and 2010 have raised serious concern about the future of the rainforest. Amazon forests are crucial because of their role as the largest carbon sink in the world, which would affect the global warming phenomenon through decreased photosynthesis activity. In particular, after a decline in plant growth in 1.68 million km2 of forest area during the once-in-a-century severe drought in 2010, it is of primary importance to understand the relationship between different climatic variables and vegetation. In an earlier study, we have shown that non-linear models are better at capturing the relation dynamics of vegetation and climate variables such as temperature and precipitation, compared to linear models. In this research, we learn precise models between vegetation and climatic variables (temperature, precipitation) for normal conditions in the Amazon region using genetic programming based symbolic regression. This is done by removing high elevation and drought affected areas and also considering the slope of the region as one of the important factors while building the model. The model learned reveals new and interesting ways historical and current climate variables affect the vegetation at any location. MAIAC data has been used as a vegetation surrogate in our study. For temperature and precipitation, we have used TRMM and MODIS Land Surface Temperature data sets while learning the non-linear regression model. However, to generalize the model to make it independent of the data source, we perform transfer learning where we regress a regularized least squares to learn the parameters of the non-linear model using other data sources such as the precipitation and temperature from the Climatic Research Unit (CRU). This new model is very similar in structure and performance compared to the original learned model and verifies the same claims about the nature of dependency between these climate variables and the vegetation in the Amazon region.
    As a result of this study, we are able to learn, for the very first time, how exactly different climate factors influence vegetation at any location in the Amazon rainforest, independent of the specific sources from which the data has been obtained.

  20. Quantifying How Climate Affects Vegetation in the Amazon Rainforest

    NASA Astrophysics Data System (ADS)

    Das, K.; Kodali, A.; Szubert, M.; Ganguly, S.; Bongard, J.

    2016-12-01

    Amazon droughts in 2005 and 2010 have raised serious concern about the future of the rainforest. Amazon forests are crucial because of their role as the largest carbon sink in the world, which would affect the global warming phenomenon through decreased photosynthesis activity. In particular, after a decline in plant growth in 1.68 million km2 of forest area during the once-in-a-century severe drought in 2010, it is of primary importance to understand the relationship between different climatic variables and vegetation. In an earlier study, we have shown that non-linear models are better at capturing the relation dynamics of vegetation and climate variables such as temperature and precipitation, compared to linear models. In this research, we learn precise models between vegetation and climatic variables (temperature, precipitation) for normal conditions in the Amazon region using genetic programming based symbolic regression. This is done by removing high elevation and drought affected areas and also considering the slope of the region as one of the important factors while building the model. The model learned reveals new and interesting ways historical and current climate variables affect the vegetation at any location. MAIAC data has been used as a vegetation surrogate in our study. For temperature and precipitation, we have used TRMM and MODIS Land Surface Temperature data sets while learning the non-linear regression model. However, to generalize the model to make it independent of the data source, we perform transfer learning where we regress a regularized least squares to learn the parameters of the non-linear model using other data sources such as the precipitation and temperature from the Climatic Research Unit (CRU). This new model is very similar in structure and performance compared to the original learned model and verifies the same claims about the nature of dependency between these climate variables and the vegetation in the Amazon region.
    As a result of this study, we are able to learn, for the very first time, how exactly different climate factors influence vegetation at any location in the Amazon rainforest, independent of the specific sources from which the data has been obtained.

  1. A Linear Regression Model Identifying the Primary Factors Contributing to Maintenance Man Hours for the C-17 Globemaster III in the Air National Guard

    DTIC Science & Technology

    2012-06-15

    Variation Inflation Factors ... total variability in the data. It is an indication of how much of the variation in the data can be accounted for in the regression model. In... Variation Inflation Factors for each independent variable (predictor) as regressed against all of the other independent variables in the model. The

  2. Impacts analysis of car following models considering variable vehicular gap policies

    NASA Astrophysics Data System (ADS)

    Xin, Qi; Yang, Nan; Fu, Rui; Yu, Shaowei; Shi, Zhongke

    2018-07-01

    Because of the important role they play in vehicles' adaptive cruise control systems, variable vehicular gap policies were incorporated into the full velocity difference model (FVDM) to investigate traffic flow properties. In this paper, two new car-following models are put forward by building the constant time headway (CTH) policy and the variable time headway (VTH) policy, separately, into the optimal velocity function. Through steady-state analysis of the new models, an equivalent optimal velocity function is defined. To determine the linear stability conditions of the new models, we introduce equivalent expressions for the safe vehicular gap and then apply small-amplitude perturbation analysis and long-wave expansion techniques. Additionally, first-order approximate solutions of the new models are derived in the stable region by transforming the models into typical Burgers' partial differential equations with the reductive perturbation method. FVDM-based numerical simulations indicate that variable vehicular gap policies with proper parameters directly contribute to improving the stability of traffic flow and avoiding unstable traffic phenomena.
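As a hedged sketch of the CTH idea described above (parameter names and values are illustrative, not taken from the paper): under a constant time headway policy the desired gap is s0 + T*v, so the optimal velocity implied by a gap h is (h - s0)/T, clipped to [0, vmax], and an optimal-velocity-type follower relaxes toward it:

```python
# Hypothetical parameters: s0 = standstill gap [m], T = time headway [s],
# vmax = free-flow speed [m/s], k = driver sensitivity [1/s].
def optimal_velocity(gap, s0=2.0, T=1.5, vmax=30.0):
    """CTH policy: the desired speed makes the gap equal s0 + T*v."""
    return max(0.0, min(vmax, (gap - s0) / T))

def simulate_follower(v_leader=20.0, dt=0.1, steps=5000, k=0.5):
    """Follower relaxes toward the CTH optimal velocity behind a
    constant-speed leader; returns the final gap and follower speed."""
    x_l, x_f, v_f = 100.0, 0.0, 0.0
    for _ in range(steps):
        gap = x_l - x_f
        v_f += k * (optimal_velocity(gap) - v_f) * dt   # OV-type relaxation
        x_l += v_leader * dt
        x_f += v_f * dt
    return x_l - x_f, v_f
```

At steady state the follower matches the leader's speed and the gap settles at s0 + T*v_leader, the equilibrium implied by the CTH policy.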

  3. Modelling long-term fire occurrence factors in Spain by accounting for local variations with geographically weighted regression

    NASA Astrophysics Data System (ADS)

    Martínez-Fernández, J.; Chuvieco, E.; Koutsias, N.

    2013-02-01

    Humans are responsible for most forest fires in Europe, but anthropogenic factors behind these events are still poorly understood. We tried to identify the driving factors of human-caused fire occurrence in Spain by applying two different statistical approaches. Firstly, assuming stationary processes for the whole country, we created models based on multiple linear regression and binary logistic regression to find factors associated with fire density and fire presence, respectively. Secondly, we used geographically weighted regression (GWR) to better understand and explore the local and regional variations of those factors behind human-caused fire occurrence. The number of human-caused fires occurring within a 25-yr period (1983-2007) was computed for each of the 7638 Spanish mainland municipalities, creating a binary variable (fire/no fire) to develop logistic models, and a continuous variable (fire density) to build standard linear regression models. A total of 383 657 fires were registered in the study dataset. The binary logistic model, which estimates the probability of having/not having a fire, successfully classified 76.4% of the total observations, while the ordinary least squares (OLS) regression model explained 53% of the variation of the fire density patterns (adjusted R2 = 0.53). Both approaches confirmed, in addition to forest and climatic variables, the importance of variables related with agrarian activities, land abandonment, rural population exodus and developmental processes as underlying factors of fire occurrence. For the GWR approach, the explanatory power of the GW linear model for fire density using an adaptive bandwidth increased from 53% to 67%, while for the GW logistic model the correctly classified observations improved only slightly, from 76.4% to 78.4%, but significantly according to the corrected Akaike Information Criterion (AICc), from 3451.19 to 3321.19. 
The results from GWR indicated a significant spatial variation in the local parameter estimates for all the variables and an important reduction of the autocorrelation in the residuals of the GW linear model. Despite the fitting improvement of local models, GW regression, more than an alternative to "global" or traditional regression modelling, seems to be a valuable complement to explore the non-stationary relationships between the response variable and the explanatory variables. The synergy of global and local modelling provides insights into fire management and policy and helps further our understanding of the fire problem over large areas while at the same time recognizing its local character.
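The core of GWR is an ordinary regression refitted at each location with weights from a distance kernel. A minimal one-predictor sketch (Gaussian kernel, scalar locations, and bandwidth are chosen for illustration; the study's actual setup uses an adaptive bandwidth over Spanish municipalities):

```python
import math

def gwr_fit_at(x0, xs, ys, locs, bandwidth):
    """Weighted least squares (slope, intercept) at location x0.

    Each observation is weighted by a Gaussian kernel of its distance
    to x0, which is the defining step of geographically weighted
    regression: nearby observations dominate the local fit.
    """
    w = [math.exp(-0.5 * ((loc - x0) / bandwidth) ** 2) for loc in locs]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, xs)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sw
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, xs))
    sxy = sum(wi * (xi - xbar) * (yi - ybar)
              for wi, xi, yi in zip(w, xs, ys))
    slope = sxy / sxx
    return slope, ybar - slope * xbar
```

On synthetic data whose true slope changes across space, the local fit recovers a different coefficient at each end of the study area, which is exactly the spatial non-stationarity the abstract describes.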

  4. Climatic variability of river outflow in the Pantanal region and the influence of sea surface temperature

    NASA Astrophysics Data System (ADS)

    Silva, Carlos Batista; Silva, Maria Elisa Siqueira; Ambrizzi, Tércio

    2017-07-01

    This paper investigates possible linear relationships between climate, hydrology, and oceanic surface variability in the Pantanal region (in South America's central area), over interannual and interdecadal time ranges. In order to verify these relations, lagged correlation analysis and linear adjustment between river discharge at the Pantanal region and sea surface temperature were used. Composite analyses of atmospheric fields, air humidity flux divergence, and atmospheric circulation at low and high levels were performed for the period between 1970 and 2003. Results suggest that the river discharge in the Pantanal region is linearly associated with interdecadal and interannual oscillations in the Pacific and Atlantic oceans, making them good predictors of continental hydrological variables. Considering oceanic areas, 51 % of the annual discharge in the Pantanal region can be linearly explained by mean sea surface temperature (SST) in the Subtropical North Pacific, Tropical North Pacific, Extratropical South Pacific, and Extratropical North Atlantic over the period. Considering a forecast approach on a seasonal scale, 66 % of the monthly discharge variance in Pantanal, 3 months ahead of SST, is explained by the oceanic variables, providing accuracy around 65 %. Annual discharge values in the Pantanal region are strongly related to the Pacific Decadal Oscillation (PDO) variability (with 52 % linear correlation), making it possible to consider an interdecadal variability and a consequent subdivision of the whole period in three parts: 1st (1970-1977), 2nd (1978-1996), and 3rd (1997-2003) subperiods. The three subperiods coincide with distinct PDO phases: negative, positive, and negative, respectively. Convergence of humidity flux at low levels and the circulation pattern at high levels help to explain the drier and wetter subperiods.
During the wetter 2nd subperiod, the air humidity convergence at low levels is much more evident than during the other two drier subperiods, which mostly show air humidity divergence. While the drier periods are particularly characterized by the strengthening of northerly wind over the center of South America, including the Pantanal region, the wetter period is characterized by its weakening. The circulation pattern at 850 hPa levels during the drier subperiods shows anticyclonic anomalies centered over east central South America. Also, the drier subperiods (1st and 3rd) are characterized by negative stream function anomalies over southeastern South America and adjacent South Atlantic, and the wetter subperiod is characterized by positive stream function anomalies. In the three subperiods, one can see mean atmospheric patterns associated with Rossby wave propagation coming from the South Pacific basin—similar to the Pacific South America pattern, but with reverse signals between the wetter and the drier periods. This result suggests a possible relationship between climatic patterns over southeastern South America regions and the Pacific conditions in a decadal scale.

  5. Method of operating a thermal engine powered by a chemical reaction

    DOEpatents

    Ross, John; Escher, Claus

    1988-01-01

    The invention involves a novel method of increasing the efficiency of a thermal engine. Heat is generated by a non-linear chemical reaction of reactants, said heat being transferred to a thermal engine such as a Rankine cycle power plant. The novel method includes externally perturbing one or more of the thermodynamic variables of said non-linear chemical reaction.

  6. Method of operating a thermal engine powered by a chemical reaction

    DOEpatents

    Ross, J.; Escher, C.

    1988-06-07

    The invention involves a novel method of increasing the efficiency of a thermal engine. Heat is generated by a non-linear chemical reaction of reactants, said heat being transferred to a thermal engine such as a Rankine cycle power plant. The novel method includes externally perturbing one or more of the thermodynamic variables of said non-linear chemical reaction. 7 figs.

  7. Building "e-rater"® Scoring Models Using Machine Learning Methods. Research Report. ETS RR-16-04

    ERIC Educational Resources Information Center

    Chen, Jing; Fife, James H.; Bejar, Isaac I.; Rupp, André A.

    2016-01-01

    The "e-rater"® automated scoring engine used at Educational Testing Service (ETS) scores the writing quality of essays. In the current practice, e-rater scores are generated via a multiple linear regression (MLR) model as a linear combination of various features evaluated for each essay and human scores as the outcome variable. This…

  8. Application of a local linearization technique for the solution of a system of stiff differential equations associated with the simulation of a magnetic bearing assembly

    NASA Technical Reports Server (NTRS)

    Kibler, K. S.; Mcdaniel, G. A.

    1981-01-01

    A digital local linearization technique was used to solve a system of stiff differential equations which simulate a magnetic bearing assembly. The results prove the technique to be accurate, stable, and efficient when compared to a general purpose variable order Adams method with a stiff option.
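The advantage of local linearization over an explicit method on a stiff problem can be shown on the scalar test equation y' = λy: explicit Euler diverges whenever |1 + hλ| > 1, while the locally linearized step, which is exact for a linear right-hand side, remains stable for any step size. An illustrative sketch, not the paper's simulation:

```python
import math

lam = -1000.0   # stiff decay rate of the test equation y' = lam * y
h = 0.01        # step size, far too large for explicit Euler

y_euler, y_local = 1.0, 1.0
for _ in range(50):
    y_euler += h * lam * y_euler    # explicit Euler: growth factor 1 + h*lam = -9
    y_local *= math.exp(h * lam)    # local linearization: exact exponential step
```

After 50 steps the Euler iterate has exploded by a factor of 9 per step, while the locally linearized solution has decayed toward zero as the true solution does, which is why the technique is "accurate, stable, and efficient" on stiff systems.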

  9. Modeling Individual Damped Linear Oscillator Processes with Differential Equations: Using Surrogate Data Analysis to Estimate the Smoothing Parameter

    ERIC Educational Resources Information Center

    Deboeck, Pascal R.; Boker, Steven M.; Bergeman, C. S.

    2008-01-01

    Among the many methods available for modeling intraindividual time series, differential equation modeling has several advantages that make it promising for applications to psychological data. One interesting differential equation model is that of the damped linear oscillator (DLO), which can be used to model variables that have a tendency to…

  10. Applying Hierarchical Linear Models (HLM) to Estimate the School and Children's Effects on Reading Achievement

    ERIC Educational Resources Information Center

    Liu, Xing

    2008-01-01

    The purpose of this study was to illustrate the use of Hierarchical Linear Models (HLM) to investigate the effects of school and children's attributes on children' reading achievement. In particular, this study was designed to: (1) develop the HLM models to determine the effects of school-level and child-level variables on children's reading…

  11. Maximizing the Information and Validity of a Linear Composite in the Factor Analysis Model for Continuous Item Responses

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2008-01-01

    This paper develops results and procedures for obtaining linear composites of factor scores that maximize: (a) test information, and (b) validity with respect to external variables in the multiple factor analysis (FA) model. I treat FA as a multidimensional item response theory model, and use Ackerman's multidimensional information approach based…

  12. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, whose minimum distance increases linearly with block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.

  13. Continuous-variable phase estimation with unitary and random linear disturbance

    NASA Astrophysics Data System (ADS)

    Delgado de Souza, Douglas; Genoni, Marco G.; Kim, M. S.

    2014-10-01

    We address the problem of continuous-variable quantum phase estimation in the presence of linear disturbance at the Hamiltonian level by means of Gaussian probe states. In particular we discuss both unitary and random disturbance by considering the parameter which characterizes the unwanted linear term present in the Hamiltonian as fixed (unitary disturbance) or random with a given probability distribution (random disturbance). We derive the optimal input Gaussian states at fixed energy, maximizing the quantum Fisher information over the squeezing angle and the squeezing energy fraction, and we discuss the scaling of the quantum Fisher information in terms of the output number of photons, nout. We observe that, in the case of unitary disturbance, the optimal state is a squeezed vacuum state and the quadratic scaling is conserved. As regards the random disturbance, we observe that the optimal squeezing fraction may not be equal to one and, for any nonzero value of the noise parameter, the quantum Fisher information scales linearly with the average number of photons. Finally, we discuss the performance of homodyne measurement by comparing the achievable precision with the ultimate limit imposed by the quantum Cramér-Rao bound.

  14. Can a minimalist model of wind forced baroclinic Rossby waves produce reasonable results?

    NASA Astrophysics Data System (ADS)

    Watanabe, Wandrey B.; Polito, Paulo S.; da Silveira, Ilson C. A.

    2016-04-01

    The linear theory predicts that Rossby waves are the large scale mechanism of adjustment to perturbations of the geophysical fluid. Satellite measurements of sea level anomaly (SLA) provided sturdy evidence of the existence of these waves. Recent studies suggest that the variability in the altimeter records is mostly due to mesoscale nonlinear eddies and challenges the original interpretation of westward propagating features as Rossby waves. The objective of this work is to test whether a classic linear dynamic model is a reasonable explanation for the observed SLA. A linear-reduced gravity non-dispersive Rossby wave model is used to estimate the SLA forced by direct and remote wind stress. Correlations between model results and observations are up to 0.88. The best agreement is in the tropical region of all ocean basins. These correlations decrease towards insignificance in mid-latitudes. The relative contributions of eastern boundary (remote) forcing and local wind forcing in the generation of Rossby waves are also estimated and suggest that the main wave forming mechanism is the remote forcing. Results suggest that linear long baroclinic Rossby wave dynamics explain a significant part of the SLA annual variability at least in the tropical oceans.

  15. Discrete-time BAM neural networks with variable delays

    NASA Astrophysics Data System (ADS)

    Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi

    2007-07-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.

  16. Study of process variables associated with manufacturing hermetically-sealed nickel-cadmium cells

    NASA Technical Reports Server (NTRS)

    Miller, L.; Doan, D. J.; Carr, E. S.

    1971-01-01

    A program to determine and study the critical process variables associated with the manufacture of aerospace, hermetically-sealed, nickel-cadmium cells is described. The determination and study of the process variables associated with the positive and negative plaque impregnation/polarization process are emphasized. The experimental data resulting from the implementation of fractional factorial design experiments are analyzed by means of a linear multiple regression analysis technique. This analysis permits the selection of preferred levels for certain process variables to achieve desirable impregnated plaque characteristics.

  17. Improved modeling of clinical data with kernel methods.

    PubMed

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies.
    For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems. Copyright © 2011 Elsevier B.V. All rights reserved.
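The abstract does not give the kernel's exact formula; the sketch below assumes a plausible form in the spirit described, where each continuous variable contributes a similarity (r_i - |x_i - y_i|)/r_i scaled by its range r_i, and the contributions are averaged so every variable has equal influence regardless of its scale:

```python
def clinical_kernel(x, y, ranges):
    """Similarity in [0, 1] between two patients x and y.

    Each continuous variable i contributes (r_i - |x_i - y_i|) / r_i,
    where r_i is that variable's observed range, and the contributions
    are averaged.  (Form assumed for illustration; the published kernel
    also handles ordinal and nominal variables separately.)
    """
    sims = [(r - abs(a - b)) / r for a, b, r in zip(x, y, ranges)]
    return sum(sims) / len(sims)
```

With a linear kernel, a 10-year age difference would swamp a binary history flag; here both variables contribute on the same [0, 1] scale, e.g. patients (age 30, flag 0) and (age 40, flag 1) with ranges (50, 1) score (0.8 + 0.0)/2 = 0.4.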

  18. Using state variables to model the response of tumour cells to radiation and heat: a novel multi-hit-repair approach.

    PubMed

    Scheidegger, Stephan; Fuchs, Hans U; Zaugg, Kathrin; Bodis, Stephan; Füchslin, Rudolf M

    2013-01-01

    In order to overcome the limitations of the linear-quadratic model and include synergistic effects of heat and radiation, a novel radiobiological model is proposed. The model is based on a chain of cell populations which are characterized by the number of radiation-induced damages (hits). Cells can shift downward along the chain by collecting hits and upward by a repair process. The repair process is governed by a repair probability which depends upon state variables used for a simplistic description of the impact of heat and radiation upon repair proteins. Based on the parameters used, populations with up to 4-5 hits are relevant for the calculation of survival. The model describes intuitively the mathematical behaviour of apoptotic and nonapoptotic cell death. Linear-quadratic-linear behaviour of the logarithmic cell survival, fractionation, and (with one exception) the dose rate dependencies are described correctly. The model covers the time gap dependence of the synergistic cell killing due to combined application of heat and radiation, but further validation of the proposed approach based on experimental data is needed. However, the model offers a workbench for testing different biological concepts of damage induction, repair, and statistical approaches for calculating the variables of state.
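The chain bookkeeping described above can be sketched as a small explicit-Euler simulation: cells move down the chain when they collect a hit and up the chain when a damage is repaired. Rates, chain length, and the absence of a death term are illustrative assumptions, not the paper's calibration:

```python
def step_chain(pop, hit_rate, repair_prob, dt):
    """One Euler step of a hit/repair population chain.

    pop[k] = fraction of cells carrying k hits.  Damage moves cells
    from state k to k+1 at hit_rate; repair moves them from k to k-1
    at repair_prob.  With no death term, total population is conserved.
    """
    n = len(pop)
    new = pop[:]
    for k in range(n):
        if k + 1 < n:
            flux = hit_rate * pop[k] * dt       # k -> k+1 (new damage)
            new[k] -= flux
            new[k + 1] += flux
        if k > 0:
            flux = repair_prob * pop[k] * dt    # k -> k-1 (repair)
            new[k] -= flux
            new[k - 1] += flux
    return new

pop = [1.0, 0.0, 0.0, 0.0, 0.0]   # all cells initially undamaged
for _ in range(1000):
    pop = step_chain(pop, hit_rate=0.5, repair_prob=1.0, dt=0.01)
```

When repair outpaces damage, the distribution settles with most cells in the low-hit states, consistent with the abstract's observation that only populations with up to 4-5 hits matter for survival calculations.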

  19. Documentation of computer program VS2D to solve the equations of fluid flow in variably saturated porous media

    USGS Publications Warehouse

    Lappala, E.G.; Healy, R.W.; Weeks, E.P.

    1987-01-01

    This report documents FORTRAN computer code for solving problems involving variably saturated single-phase flow in porous media. The flow equation is written with total hydraulic potential as the dependent variable, which allows straightforward treatment of both saturated and unsaturated conditions. The spatial derivatives in the flow equation are approximated by central differences, and time derivatives are approximated either by a fully implicit backward scheme or by a centered-difference scheme. Nonlinear conductance and storage terms may be linearized using either an explicit method or an implicit Newton-Raphson method. Relative hydraulic conductivity is evaluated at cell boundaries by using either full upstream weighting, the arithmetic mean, or the geometric mean of values from adjacent cells. Nonlinear boundary conditions treated by the code include infiltration, evaporation, and seepage faces. Extraction by plant roots that is caused by atmospheric demand is included as a nonlinear sink term. These nonlinear boundary and sink terms are linearized implicitly. The code has been verified against several one-dimensional linear problems for which analytical solutions exist and against two nonlinear problems that have been simulated with other numerical models. A complete listing of data-entry requirements, together with the data entry and results for three example problems, is provided. (USGS)
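
    The three intercell weighting options for relative hydraulic conductivity named above can be stated compactly. The helper below is our own sketch for illustration, not VS2D code.

```python
import math

def face_conductivity(k_up, k_down, scheme="geometric"):
    """Relative conductivity at a cell face from the two adjacent cell values."""
    if scheme == "upstream":     # full upstream weighting
        return k_up
    if scheme == "arithmetic":   # arithmetic mean of adjacent cells
        return 0.5 * (k_up + k_down)
    if scheme == "geometric":    # geometric mean of adjacent cells
        return math.sqrt(k_up * k_down)
    raise ValueError(scheme)

# The choice matters most when conductivities differ by orders of
# magnitude across a wetting front:
print(face_conductivity(1e-4, 1e-6, "arithmetic"))
print(face_conductivity(1e-4, 1e-6, "geometric"))
```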

  20. Dynamics of one-dimensional self-gravitating systems using Hermite-Legendre polynomials

    NASA Astrophysics Data System (ADS)

    Barnes, Eric I.; Ragan, Robert J.

    2014-01-01

    The current paradigm for understanding galaxy formation in the Universe depends on the existence of self-gravitating collisionless dark matter. Modelling such dark matter systems has been a major focus of astrophysicists, with much of that effort directed at computational techniques. Not surprisingly, a comprehensive understanding of the evolution of these self-gravitating systems still eludes us, since it involves the collective non-linear dynamics of many particle systems interacting via long-range forces described by the Vlasov equation. As a step towards developing a clearer picture of collisionless self-gravitating relaxation, we analyse the linearized dynamics of isolated one-dimensional systems near thermal equilibrium by expanding their phase-space distribution functions f(x, v) in terms of Hermite functions in the velocity variable, and Legendre functions involving the position variable. This approach produces a picture of phase-space evolution in terms of expansion coefficients, rather than spatial and velocity variables. We obtain equations of motion for the expansion coefficients for both test-particle distributions and self-gravitating linear perturbations of thermal equilibrium. N-body simulations of perturbed equilibria are performed and found to be in excellent agreement with the expansion coefficient approach over a time duration that depends on the size of the expansion series used.

  1. Compact characterization of liquid absorption and emission spectra using linear variable filters integrated with a CMOS imaging camera.

    PubMed

    Wan, Yuhang; Carlson, John A; Kesler, Benjamin A; Peng, Wang; Su, Patrick; Al-Mulla, Saoud A; Lim, Sung Jun; Smith, Andrew M; Dallesasse, John M; Cunningham, Brian T

    2016-07-08

    A compact analysis platform for detecting liquid absorption and emission spectra using a set of optical linear variable filters atop a CMOS image sensor is presented. The working spectral range of the analysis platform can be extended without a reduction in spectral resolution by utilizing multiple linear variable filters with different wavelength ranges on the same CMOS sensor. With optical setup reconfiguration, its capability to measure both absorption and fluorescence emission is demonstrated. Quantitative detection of fluorescence emission down to 0.28 nM for quantum dot dispersions and 32 ng/mL for near-infrared dyes has been demonstrated on a single platform over a wide spectral range, as well as an absorption-based water quality test, showing the versatility of the system across liquid solutions with different emission and absorption bands. Comparison with a commercially available portable spectrometer and an optical spectrum analyzer shows that our system has an improved signal-to-noise ratio and acceptable spectral resolution for discriminating emission spectra and for characterizing the absorption of colored liquids generated by common biomolecular assays. This simple, compact, and versatile analysis platform demonstrates a path towards an integrated optical device that can be utilized for a wide variety of applications in point-of-use testing and point-of-care diagnostics.

  2. Tools to identify linear combination of prognostic factors which maximizes area under receiver operator curve.

    PubMed

    Todor, Nicolae; Todor, Irina; Săplăcan, Gavril

    2014-01-01

    The linear combination of variables is an attractive method in many medical analyses targeting a score to classify patients. In the case of ROC curves, the most popular problem is to identify the linear combination which maximizes the area under the curve (AUC). This problem is completely solved when normality assumptions are met. Without the normality assumption, search algorithms are avoided because it is accepted that AUC must be evaluated n^d times, where n is the number of distinct observations and d is the number of variables. For d = 2, using particularities of the AUC formula, we describe an algorithm which lowers the number of AUC evaluations from n^2 to n(n-1) + 1. For d > 2, our proposed solution is an approximate method that evaluates AUC at equidistant points on the unit sphere in R^d. The algorithms were applied to data from our lab to predict response to treatment from a set of molecular markers in cervical cancer patients. A simulation was added in order to evaluate the strength of our algorithms. When normality does not hold, the presented algorithms are feasible. With many variables the computation time increases, but it remains acceptable.
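
    The empirical AUC being maximized can be computed with the Mann-Whitney statistic. The brute-force angle scan below for d = 2 on synthetic data is only a naive baseline; the paper's algorithm instead enumerates the much smaller set of directions where the AUC can actually change.

```python
import numpy as np

def auc(scores, labels):
    """Empirical AUC = P(score_pos > score_neg) + 0.5 * P(tie)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

# Synthetic two-variable data whose labels depend on x1 + 2*x2.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
y = (x[:, 0] + 2 * x[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Naive scan over score directions a = (cos t, sin t).
best_auc, best_t = max(
    (auc(np.cos(t) * x[:, 0] + np.sin(t) * x[:, 1], y), t)
    for t in np.linspace(0, np.pi, 361)
)
print(best_auc)  # best AUC found over the scanned directions
```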

  3. AUTONOMIC CONTROL OF HEART RATE AFTER EXERCISE IN TRAINED WRESTLERS

    PubMed Central

    Báez, San Martín E.; Von Oetinger, A.; Cañas, Jamett R.; Ramírez, Campillo R.

    2013-01-01

    The objective of this study was to establish differences in vagal reactivation, through heart rate recovery and heart rate variability post exercise, in Brazilian jiu-jitsu wrestlers (BJJW). A total of 18 male athletes were evaluated, ten highly trained (HT) and eight moderately trained (MT), who performed a maximum incremental test. At the end of the exercise, the R-R intervals were recorded during the first minute of recovery. We calculated heart rate recovery (HRR60s), and performed linear and non-linear (standard deviation of instantaneous beat-to-beat R-R interval variability, SD1) analysis of heart rate variability (HRV), using the tachogram of the first minute of recovery divided into four segments of 15 s each (0-15 s, 15-30 s, 30-45 s, 45-60 s). Between HT and MT individuals, there were statistically significant differences in HRR60s (p < 0.05) and in the non-linear analysis of HRV for SD1 at 30-45 s (p < 0.05) and SD1 at 45-60 s (p < 0.05). The results of this research suggest that heart rate kinetics during the first minute after exercise are related to training level and can be used as an index of autonomic cardiovascular control in BJJW. PMID:24744476

  4. Post-processing through linear regression

    NASA Astrophysics Data System (ADS)

    van Schaeybroeck, B.; Vannitsem, S.

    2011-03-01

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast, and multicollinearity. The regression schemes under consideration include the ordinary least-squares (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-squares method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial-condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best-member OLS with noise). At long lead times, the regression schemes which yield the correct variability and the largest correlation between ensemble error and spread (EVMOS, TDTR) should be preferred.
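
    The simplest of the schemes compared above, OLS post-processing, regresses past observations on past forecasts and applies the fitted line to correct new forecasts. A self-contained sketch on synthetic data (the model's bias and amplitude error are invented for illustration):

```python
import numpy as np

# Synthetic "model" with systematic amplitude and bias errors.
rng = np.random.default_rng(1)
truth = rng.normal(size=500)
forecast = 0.7 * truth + 0.3 + rng.normal(scale=0.2, size=500)

# OLS post-processing: fit truth ~ a + b * forecast, then apply the line.
A = np.column_stack([np.ones_like(forecast), forecast])
coef, *_ = np.linalg.lstsq(A, truth, rcond=None)
corrected = A @ coef

print(abs(np.mean(corrected - truth)))  # mean bias removed by construction
```

With an intercept included, the mean residual of an OLS fit is zero, which is exactly the "effective correction of forecast error" criterion for the bias component.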

  5. Autonomic control of heart rate after exercise in trained wrestlers.

    PubMed

    Henríquez, Olguín C; Báez, San Martín E; Von Oetinger, A; Cañas, Jamett R; Ramírez, Campillo R

    2013-06-01

    The objective of this study was to establish differences in vagal reactivation, through heart rate recovery and heart rate variability post exercise, in Brazilian jiu-jitsu wrestlers (BJJW). A total of 18 male athletes were evaluated, ten highly trained (HT) and eight moderately trained (MT), who performed a maximum incremental test. At the end of the exercise, the R-R intervals were recorded during the first minute of recovery. We calculated heart rate recovery (HRR60s), and performed linear and non-linear (standard deviation of instantaneous beat-to-beat R-R interval variability, SD1) analysis of heart rate variability (HRV), using the tachogram of the first minute of recovery divided into four segments of 15 s each (0-15 s, 15-30 s, 30-45 s, 45-60 s). Between HT and MT individuals, there were statistically significant differences in HRR60s (p < 0.05) and in the non-linear analysis of HRV for SD1 at 30-45 s (p < 0.05) and SD1 at 45-60 s (p < 0.05). The results of this research suggest that heart rate kinetics during the first minute after exercise are related to training level and can be used as an index of autonomic cardiovascular control in BJJW.
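
    The two recovery indices used in this study can be computed directly from the R-R tachogram. The helper names and the sample intervals below are illustrative; SD1 is obtained here from successive R-R differences, a standard equivalent of the Poincaré-plot definition.

```python
import numpy as np

def sd1(rr_ms):
    """Poincare SD1 (ms): short-term beat-to-beat variability,
    computed from successive R-R interval differences."""
    d = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(0.5) * d.std(ddof=1))

def hrr60s(hr_peak_bpm, rr_at_60s_ms):
    """Heart rate recovery: peak HR minus HR one minute after exercise."""
    return hr_peak_bpm - 60000.0 / rr_at_60s_ms

rr = [520, 540, 530, 555, 560, 548, 570]  # R-R intervals (ms) in one 15 s segment
print(round(sd1(rr), 1))  # SD1 for the segment
print(hrr60s(185, 600))   # peak 185 bpm, R-R 600 ms at 60 s -> 85.0
```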

  6. Moment-to-Moment BOLD Signal Variability Reflects Regional Changes in Neural Flexibility across the Lifespan.

    PubMed

    Nomi, Jason S; Bolt, Taylor S; Ezie, C E Chiemeka; Uddin, Lucina Q; Heller, Aaron S

    2017-05-31

    Variability of neuronal responses is thought to underlie flexible and optimal brain function. Because previous work investigating BOLD signal variability has been conducted within task-based fMRI contexts on adults and older individuals, very little is currently known regarding regional changes in spontaneous BOLD signal variability in the human brain across the lifespan. The current study used resting-state fMRI data from a large sample of male and female human participants covering a wide age range (6-85 years) across two different fMRI acquisition parameters (TR = 0.645 and 1.4 s). Variability in brain regions including a key node of the salience network (anterior insula) increased linearly across the lifespan across datasets. In contrast, variability in most other large-scale networks decreased linearly over the lifespan. These results demonstrate unique lifespan trajectories of BOLD variability related to specific regions of the brain and add to a growing literature demonstrating the importance of identifying normative trajectories of functional brain maturation. SIGNIFICANCE STATEMENT Although brain signal variability has traditionally been considered a source of unwanted noise, recent work demonstrates that variability in brain signals during task performance is related to brain maturation in old age as well as individual differences in behavioral performance. The current results demonstrate that intrinsic fluctuations in resting-state variability exhibit unique maturation trajectories in specific brain regions and systems, particularly those supporting salience detection. These results have implications for investigations of brain development and aging, as well as interpretations of brain function underlying behavioral changes across the lifespan. Copyright © 2017 the authors 0270-6474/17/375539-10$15.00/0.

  7. Development of reaching during mid-childhood from a Developmental Systems perspective.

    PubMed

    Golenia, Laura; Schoemaker, Marina M; Otten, Egbert; Mouton, Leonora J; Bongers, Raoul M

    2018-01-01

    Inspired by the Developmental Systems perspective, we studied the development of reaching during mid-childhood (5-10 years of age) not just at the performance level (i.e., endpoint movements), as commonly done in earlier studies, but also at the joint angle level. Because the endpoint position (i.e., the tip of the index finger) at the reaching target can be achieved with multiple joint angle combinations, we partitioned variability in joint angles over trials into variability that does not (goal-equivalent variability, GEV) and that does (non-goal-equivalent variability, NGEV) influence the endpoint position, using the Uncontrolled Manifold method. Quantifying this structure in joint angle variability allowed us to examine whether and how spatial variability of the endpoint at the reaching target is related to variability in joint angles and how this changes over development. 6-, 8- and 10-year-old children and young adults performed reaching movements to a target with the index finger. Polynomial trend analysis revealed a linear and a quadratic decreasing trend for the variable error. Linear decreasing and cubic trends were found for joint angle standard deviations at movement end. GEV and NGEV decreased gradually with age, but interestingly, the decrease of GEV was steeper than the decrease of NGEV, showing that the different parts of the joint angle variability changed differently over age. We interpreted these changes in the structure of variability as indicating changes over age in exploration for synergies (a family of task solutions), a concept that links the performance level with the joint angle level. Our results suggest changes in the search for synergies during mid-childhood development.
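
    The Uncontrolled Manifold partition described above projects joint-angle deviations onto the null space of the endpoint Jacobian (goal-equivalent, GEV) and its orthogonal complement (non-goal-equivalent, NGEV). A sketch with an arbitrary planar three-joint Jacobian and synthetic trial data (not the study's measurements):

```python
import numpy as np

# Arbitrary planar 3-joint endpoint Jacobian (2x3): d(endpoint)/d(angles).
J = np.array([[1.0, 0.8, 0.4],
              [0.2, 0.9, 0.7]])

rng = np.random.default_rng(2)
angles = rng.normal(size=(100, 3))   # joint configurations over trials
dev = angles - angles.mean(axis=0)   # deviations from the mean posture

# Orthonormal bases of the null space (directions that leave the endpoint
# unchanged) and the range space of J, via SVD.
_, s, Vt = np.linalg.svd(J)
null_basis = Vt[len(s):]             # 1 x 3 here
range_basis = Vt[:len(s)]            # 2 x 3 here

# Variance per dimension within (GEV) and orthogonal to (NGEV) the manifold.
gev = ((dev @ null_basis.T) ** 2).sum() / (len(dev) * null_basis.shape[0])
ngev = ((dev @ range_basis.T) ** 2).sum() / (len(dev) * range_basis.shape[0])
print(gev, ngev)
```

Because the SVD rows form an orthonormal basis, GEV and NGEV together account exactly for the total joint-angle variance, so their relative sizes can be compared across ages as in the study.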

  8. Hybrid Discrete-Continuous Markov Decision Processes

    NASA Technical Reports Server (NTRS)

    Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich

    2003-01-01

    This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the mono-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises several new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially-observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.

  9. Simple quasi-analytical holonomic homogenization model for the non-linear analysis of in-plane loaded masonry panels: Part 1, meso-scale

    NASA Astrophysics Data System (ADS)

    Milani, G.; Bertolesi, E.

    2017-07-01

    A simple quasi-analytical holonomic homogenization approach for the non-linear analysis of in-plane loaded masonry walls is presented. The elementary cell (REV) is discretized with 24 triangular elastic constant-stress elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for the mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how the homogenized stress-strain behavior can be evaluated semi-analytically.

  10. Simultaneous linear and circular polarization observations of blazars 3C 66A, OJ 287 and Markarian 421

    NASA Astrophysics Data System (ADS)

    Takalo, Leo O.; Sillanpaa, Aimo

    1993-08-01

    We present the first ever simultaneous optical linear and circular polarization observations of blazars. These polarizations were measured simultaneously in the UBVRI bands in three blazars: 3C 66A, OJ 287 and Markarian 421. The measured linear polarization in 3C 66A was the largest ever observed, P_R = 33.1 +/- 0.5%. In 3C 66A we detected small circular polarization in all bands except U. In OJ 287 we detected variable circular polarization in the U band.

  11. An outflow boundary condition for aeroacoustic computations

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Hagstrom, Thomas

    1995-01-01

    A formulation of boundary condition for flows with small disturbances is presented. The authors test their methodology in an axisymmetric jet flow calculation, using both the Navier-Stokes and Euler equations. Solutions in the far field are assumed to be oscillatory. If the oscillatory disturbances are small, the growth of the solution variables can be predicted by linear theory. Eigenfunctions of the linear theory are used explicitly in the formulation of the boundary conditions. This guarantees correct solutions at the boundary in the limit where the predictions of linear theory are valid.

  12. Evaluating Feynman integrals by the hypergeometry

    NASA Astrophysics Data System (ADS)

    Feng, Tai-Fu; Chang, Chao-Hsi; Chen, Jian-Bin; Gu, Zhi-Hua; Zhang, Hai-Bin

    2018-02-01

    The hypergeometric function method naturally provides the analytic expressions of scalar integrals from the concerned Feynman diagrams in some connected regions of the independent kinematic variables, and also yields the systems of homogeneous linear partial differential equations satisfied by the corresponding scalar integrals. Taking the one-loop B0 and massless C0 functions, as well as the scalar integrals of the two-loop vacuum and sunset diagrams, as examples, we verify that our expressions coincide with well-known results from the literature. Based on the multiple hypergeometric functions of the independent kinematic variables, the systems of homogeneous linear partial differential equations satisfied by the mentioned scalar integrals are established. Using the calculus of variations, one recognizes the system of linear partial differential equations as the stationary conditions of a functional under some given restrictions, which is the cornerstone for performing the continuation of the scalar integrals to the whole kinematic domain numerically with finite element methods. In principle this method can be used to evaluate the scalar integrals of any Feynman diagram.

  13. Modeling Pan Evaporation for Kuwait by Multiple Linear Regression

    PubMed Central

    Almedeij, Jaber

    2012-01-01

    Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data on temperature, relative humidity, and wind speed. The data used for the modeling are daily measurements with substantially continuous coverage over a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. A multiple linear regression technique is used with a variable selection procedure for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data, using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results in reasonable agreement with observed values. PMID:23226984
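
    The fitting strategy (transform temperature and humidity to linearize their curvilinear relations, then solve an ordinary multiple linear regression) can be sketched on synthetic data. The functional forms, exponents and coefficients below are illustrative stand-ins, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.uniform(15, 48, 300)     # air temperature (deg C)
RH = rng.uniform(5, 90, 300)     # relative humidity (%)
U = rng.uniform(0.5, 8.0, 300)   # wind speed (m/s)

# Synthetic pan evaporation: power law in T, exponential decay in RH.
Ep = 0.02 * T**1.5 * np.exp(-0.01 * RH) + 0.3 * U + rng.normal(0, 0.2, 300)

# Linearize via the transformed predictor, then fit ordinary MLR.
X = np.column_stack([np.ones_like(T), T**1.5 * np.exp(-0.01 * RH), U])
beta, *_ = np.linalg.lstsq(X, Ep, rcond=None)
r = np.corrcoef(X @ beta, Ep)[0, 1]
print(r)  # agreement of fitted vs. "observed" evaporation
```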

  14. Trends in non-stationary signal processing techniques applied to vibration analysis of wind turbine drive train - A contemporary survey

    NASA Astrophysics Data System (ADS)

    Uma Maheswari, R.; Umamaheswari, R.

    2017-02-01

    Condition monitoring systems (CMS) offer potential economic benefits and enable prognostic maintenance for wind turbine-generator failure prevention. Vibration monitoring and analysis is a powerful tool in drive-train CMS, enabling early detection of impending failure/damage. In variable speed drives such as wind turbine-generator drive trains, the acquired vibration signal is non-stationary and non-linear. Traditional stationary signal processing techniques are inefficient for diagnosing machine faults under time-varying conditions. The current research trend in CMS for drive trains focuses on developing/improving non-linear, non-stationary feature extraction and fault classification algorithms to improve fault detection/prediction sensitivity and selectivity, thereby reducing misdetection and false alarm rates. Stationary signal processing algorithms employed in vibration analysis have been reviewed extensively in the literature. In this paper, an attempt is made to review recent research advances in non-linear, non-stationary signal processing algorithms particularly suited to variable speed wind turbines.

  15. A Kernel Embedding-Based Approach for Nonstationary Causal Model Inference.

    PubMed

    Hu, Shoubo; Chen, Zhitang; Chan, Laiwan

    2018-05-01

    Although nonstationary data are more common in the real world, most existing causal discovery methods do not take nonstationarity into consideration. In this letter, we propose a kernel embedding-based approach, ENCI, for nonstationary causal model inference where data are collected from multiple domains with varying distributions. In ENCI, we transform the complicated relation of a cause-effect pair into a linear model of variables whose observations correspond to the kernel embeddings of the cause and effect distributions in different domains. In this way, we are able to estimate the causal direction by exploiting the causal asymmetry of the transformed linear model. Furthermore, we extend ENCI to causal graph discovery for multiple variables by transforming the relations among them into a linear non-Gaussian acyclic model. We show that by exploiting the nonstationarity of distributions, both cause-effect pairs and two kinds of causal graphs are identifiable under mild conditions. Experiments on synthetic and real-world data are conducted to justify the efficacy of ENCI over major existing methods.

  16. Systemic oxidative stress associated with the neurological diseases of aging.

    PubMed

    Serra, Jorge A; Domínguez, Raúl O; Marschoff, Enrique R; Guareschi, Eduardo M; Famulari, Arturo L; Boveris, Alberto

    2009-12-01

    Markers of oxidative stress were measured in blood samples of 338 subjects (965 observations): Alzheimer's disease, vascular dementia, diabetes (type II) superimposed on dementia, Parkinson's disease and controls. Patients showed increased thiobarbituric acid reactive substances (+21%; P < 0.05) and copper-zinc superoxide dismutase (+64%; P < 0.001) and decreased antioxidant capacity (-28%; P < 0.001); pairs of variables were linearly related across groups (P < 0.001). Catalase and glutathione peroxidase, involved in discrimination between diseases, were non-significant. When diabetes was superimposed on dementia, the changes were less marked but still significant. Also, superoxide dismutase was neither linearly correlated with any other variable nor age-related (pure Alzheimer's peaks at 70 years, P < 0.001). Systemic oxidative stress was significantly associated (P < 0.001) with all diseases, indicating an imbalance in peripheral/adaptive responses to oxidative disorders through different free radical metabolic pathways. While other changes (the methionine cycle, insulin correlation) are also associated with dementia, the responses presented here show a simple linear relation between prooxidants and antioxidant defenses.

  17. Morphometric study of third-instar larvae from five morphotypes of the Anastrepha fraterculus cryptic species complex (Diptera, Tephritidae)

    PubMed Central

    Canal, Nelson A.; Hernández-Ortiz, Vicente; Salas, Juan O. Tigrero; Selivon, Denise

    2015-01-01

    Abstract The occurrence of cryptic species among economically important fruit flies strongly affects the development of management tactics for these pests. Tools for studying cryptic species not only facilitate evolutionary and systematic studies, but they also provide support for fruit fly management and quarantine activities. Previous studies have shown that the South American fruit fly, Anastrepha fraterculus, is a complex of cryptic species, but few studies have been performed on the morphology of its immature stages. An analysis of mandible shape and linear morphometric variability was applied to third-instar larvae of five morphotypes of the Anastrepha fraterculus complex: Mexican, Andean, Ecuadorian, Peruvian and Brazilian-1. Outline geometric morphometry was used to study the mouth hook shape and linear morphometry analysis was performed using 24 linear measurements of the body, cephalopharyngeal skeleton, mouth hook and hypopharyngeal sclerite. Different morphotypes were grouped accurately using canonical discriminant analyses of both the geometric and linear morphometry. The shape of the mandible differed among the morphotypes, and the anterior spiracle length, number of tubules of the anterior spiracle, length and height of the mouth hook and length of the cephalopharyngeal skeleton were the most significant variables in the linear morphometric analysis. Third-instar larvae provide useful characters for studies of cryptic species in the Anastrepha fraterculus complex. PMID:26798253

  18. Applying SDDP to very large hydro-economic models with a simplified formulation for irrigation: the case of the Tigris-Euphrates river basin.

    NASA Astrophysics Data System (ADS)

    Rougé, Charles; Tilmant, Amaury

    2015-04-01

    Stochastic dual dynamic programming (SDDP) is an optimization algorithm well-suited for the study of large-scale water resources systems comprising reservoirs - and hydropower plants - as well as irrigation nodes. It generates intertemporal allocation policies that balance the present and future marginal value of water while taking into account hydrological uncertainty. It is scalable, in the sense that the time and memory required for computation do not grow exponentially with the number of state variables. Still, this scalability relies on the sampling of a few relevant trajectories for the system, and the approximation of the future value of water through cuts -i.e., hyperplanes - at points along these trajectories. Therefore, the accuracy of this approximation arguably decreases as the number of state variables increases, and it is important not to have more than necessary. In previous formulations, SDDP had three types of state variables, namely storage in each reservoir, inflow at each node and water accumulated during the irrigation season for each crop at each node. We present a simplified formulation for irrigation that does not require using the latter type of state variable. It also requires only two decision variables for each irrigation site, where the previous formulation had four per crop - and there may be several crops at the same site. This reduction in decision variables effectively reduces computation time, since SDDP decomposes the stochastic, multiperiodic, non-linear maximization problem into a series of linear ones. The proposed formulation, while computationally simpler, is mathematically equivalent to the previous one, and therefore the model gives the same results. A corollary of this formulation is that marginal utility of water at an irrigation site is effectively related to consumption at that site, through a piecewise linear function representing the net benefits from irrigation. 
Last but not least, the proposed formulation can be extended to any type of consumptive use of water beyond irrigation, e.g., municipal, industrial, etc. This slightly different version of SDDP is applied to a large portion of the Tigris-Euphrates river basin. It comprises 24 state variables representing storage in reservoirs, 28 hydrologic state variables, and 51 demand nodes. It is the largest application yet to simultaneously consider hydropower and irrigation within the same river system, and the proposed formulation almost halves the number of state variables to be considered.

  19. Peripheral refraction profiles in subjects with low foveal refractive errors.

    PubMed

    Tabernero, Juan; Ohlendorf, Arne; Fischer, M Dominik; Bruckmann, Anna R; Schiefer, Ulrich; Schaeffel, Frank

    2011-03-01

    To study the variability of peripheral refraction in a population of 43 subjects with low foveal refractive errors. A scan of the refractive error in the vertical pupil meridian of the right eye of 43 subjects (age range, 18 to 80 years; foveal spherical equivalent < ±2.5 diopters) over the central ±45° of the visual field was performed using a recently developed angular scanning photorefractor. Refraction profiles across the visual field were fitted with four different models: (1) "flat model" (refraction approximately constant across the visual field), (2) "parabolic model" (refraction approximately following a parabolic function), (3) "bi-linear model" (linear change of refraction with eccentricity from the fovea to the periphery), and (4) "box model" ("flat" central area with a linear change in refraction from a certain peripheral angle). Based on the minimal residuals of each fit, the subjects were classified into one of the four models. The "box model" accurately described the peripheral refractions in about 50% of the subjects. Peripheral refractions in six subjects were better characterized by the "bi-linear model", in eight subjects by the "flat model", and in eight by the "parabolic model". Even after assignment to one of the models, the variability remained strikingly large, ranging from -0.75 to 6 diopters in the temporal retina at 45° eccentricity. The most common peripheral refraction profile (observed in nearly 50% of our population) was best described by the "box model". The high variability among subjects may limit attempts to reduce myopia progression with a uniform lens design and may rather call for a customized approach.
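
    The model-selection step (fit each candidate profile by least squares and keep the one with minimal residuals) can be sketched as follows. The synthetic scan, noise level, and fixed knee angle are illustrative choices, not the study's data or fitting code.

```python
import numpy as np

ecc = np.linspace(-45, 45, 91)                                     # eccentricity (deg)
true = np.where(np.abs(ecc) > 20, 0.1 * (np.abs(ecc) - 20), 0.0)   # a "box" profile
meas = true + np.random.default_rng(4).normal(0, 0.1, ecc.size)    # noisy scan

def fit_flat(x, y):        # (1) refraction constant across the field
    return np.full_like(y, y.mean())

def fit_parabola(x, y):    # (2) quadratic trend with eccentricity
    return np.polyval(np.polyfit(x, y, 2), x)

def fit_bilinear(x, y):    # (3) linear in |eccentricity| from the fovea
    X = np.column_stack([np.ones_like(x), np.abs(x)])
    return X @ np.linalg.lstsq(X, y, rcond=None)[0]

def fit_box(x, y, knee=20.0):  # (4) flat centre, linear beyond a knee angle
    X = np.column_stack([np.ones_like(x), np.maximum(np.abs(x) - knee, 0.0)])
    return X @ np.linalg.lstsq(X, y, rcond=None)[0]

fits = {"flat": fit_flat, "parabolic": fit_parabola,
        "bi-linear": fit_bilinear, "box": fit_box}
rss = {name: float(((f(ecc, meas) - meas) ** 2).sum()) for name, f in fits.items()}
print(min(rss, key=rss.get))  # the profile class assigned to this scan
```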

  20. Pacific decadal variability in the view of linear equatorial wave theory

    NASA Astrophysics Data System (ADS)

    Emile-Geay, J. B.; Cane, M. A.

    2006-12-01

    It has recently been proposed, within the framework of the linear shallow water equations, that tropical Pacific decadal variability can be accounted for by basin modes with eigenperiods of 10 to 20 years, amplifying a mid-latitude wind forcing with an essentially white spectrum (Cessi and Louazel 2001; Liu 2003). We question this idea here, using a different formalism of linear equatorial wave theory. We compute the Green's function for the wind-forced response of a linear equatorial shallow water ocean, and use the results of Cane and Moore (1981) to obtain a compact, closed-form expression for the motion of the equatorial thermocline, which applies to all frequencies lower than seasonal. At very low frequencies (decadal timescales), we recover the planetary geostrophic solution used by Cessi and Louazel (2001), as well as the equatorial wave solution of Liu (2003), and give a formal explanation for this convergence. Using this more general solution to explore more realistic wind forcings, we come to a different interpretation of the results. We find that the equatorial thermocline is inherently more sensitive to local than to remote wind forcing, and that planetary Rossby modes only weakly alter the spectral characteristics of the response. Tropical winds are able to generate a strong equatorial response with periods of 10 to 20 years, while midlatitude winds can only do so for periods longer than about 50 years. Since the decadal pattern of observed winds shows similar amplitude for tropical and midlatitude winds, we conclude that the latter are unlikely to be responsible for the observed decadal tropical Pacific SST variability. References: Cane, M. A., and Moore, D. W., 1981: A note on low-frequency equatorial basin modes. J. Phys. Oceanogr., 11(11), 1578-1584. Cessi, P., and Louazel, S., 2001: Decadal oceanic response to stochastic wind forcing. J. Phys. Oceanogr., 31, 3020-3029. Liu, Z., 2003: Tropical ocean decadal variability and resonance of planetary wave basin modes. J. Clim., 16(18), 1539-1550.

  1. Investigating the Implications of a Variable RBE on Proton Dose Fractionation Across a Clinical Pencil Beam Scanned Spread-Out Bragg Peak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, Thomas I.; Chaudhary, Pankaj; Michaelidesová, Anna

    2016-05-01

    Purpose: To investigate the clinical implications of a variable relative biological effectiveness (RBE) on proton dose fractionation. Using acute exposures, the current clinical adoption of a generic, constant cell killing RBE has been shown to underestimate the effect of the sharp increase in linear energy transfer (LET) in the distal regions of the spread-out Bragg peak (SOBP). However, experimental data for the impact of dose fractionation in such scenarios are still limited. Methods and Materials: Human fibroblasts (AG01522) at 4 key depth positions on a clinical SOBP of maximum energy 219.65 MeV were subjected to various fractionation regimens with an interfraction period of 24 hours at the Proton Therapy Center in Prague, Czech Republic. Cell killing RBE variations were measured using standard clonogenic assays and were further validated using Monte Carlo simulations and parameterized using a linear quadratic formalism. Results: Significant variations in the cell killing RBE for fractionated exposures along the proton dose profile were observed. RBE increased sharply toward the distal position, corresponding to a reduction in cell sparing effectiveness of fractionated proton exposures at higher LET. The effect was more pronounced at smaller doses per fraction. Experimental survival fractions were adequately predicted using a linear quadratic formalism assuming full repair between fractions. Data were also used to validate a parameterized variable RBE model based on linear α parameter response with LET that showed considerable deviations from clinically predicted isoeffective fractionation regimens. Conclusions: The RBE-weighted absorbed dose calculated using the clinically adopted generic RBE of 1.1 significantly underestimates the biological effective dose from variable RBE, particularly in fractionation regimens with low doses per fraction.
    Coupled with an increase in effective range in fractionated exposures, our study provides an RBE dataset that can be used by the modeling community for the optimization of fractionated proton therapy.
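    The linear quadratic formalism with full repair between fractions, as used above, can be sketched directly. This is a minimal illustration with hypothetical α and β values (not the fibroblast parameters fitted in the study), showing why splitting the same total dose into smaller fractions spares more cells:

```python
import math

def surviving_fraction(alpha, beta, dose_per_fraction, n_fractions):
    """Linear-quadratic (LQ) cell survival with full repair between
    fractions: S = exp(-n * (alpha*d + beta*d^2))."""
    d = dose_per_fraction
    return math.exp(-n_fractions * (alpha * d + beta * d ** 2))

# Hypothetical parameters for illustration only (units Gy^-1 and Gy^-2)
alpha, beta = 0.5, 0.05

# The same 8 Gy total dose delivered in 1 fraction vs 4 fractions of 2 Gy:
single = surviving_fraction(alpha, beta, 8.0, 1)
fractionated = surviving_fraction(alpha, beta, 2.0, 4)
# Fractionation reduces the quadratic contribution, so survival is higher
# for the 4 x 2 Gy schedule than for the single 8 Gy exposure.
```

A variable RBE enters this picture by making the effective α and β depend on LET, which is why the cell-sparing effect of fractionation shrinks toward the distal edge of the SOBP.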

  2. Variable selection in near-infrared spectroscopy: benchmarking of feature selection methods on biodiesel data.

    PubMed

    Balabin, Roman M; Smirnov, Sergey V

    2011-04-29

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields from petroleum to biomedical sectors. The NIR spectrum (above 4000 cm⁻¹) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented.
The results of other spectroscopic techniques, such as Raman, ultraviolet-visible (UV-vis), or nuclear magnetic resonance (NMR) spectroscopy, can also be greatly improved by an appropriate choice of feature selection method. Copyright © 2011 Elsevier B.V. All rights reserved.
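    Of the methods benchmarked above, stepwise MLR is the simplest to sketch. Below is a minimal greedy forward-selection loop on synthetic data; the matrix sizes and informative column indices are illustrative, not taken from the biodiesel benchmark, and real NIR work would typically select contiguous wavelength intervals (as iPLS does) rather than free columns:

```python
import numpy as np

def forward_select(X, y, n_features):
    """Greedy forward selection: repeatedly add the wavelength (column)
    that most reduces the least-squares residual sum of squares."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_features):
        best_j, best_rss = None, np.inf
        for j in remaining:
            cols = selected + [j]
            A = np.column_stack([np.ones(len(y)), X[:, cols]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ coef) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))          # 60 "spectra", 20 "wavelengths"
y = 3 * X[:, 4] - 2 * X[:, 11] + rng.normal(scale=0.1, size=60)
picked = sorted(forward_select(X, y, 2))   # recovers the informative columns
```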

  3. A mechanical comparison of linear and double-looped hung supplemental heavy chain resistance to the back squat: a case study.

    PubMed

    Neelly, Kurt R; Terry, Joseph G; Morris, Martin J

    2010-01-01

    A relatively new and scarcely researched technique to increase strength is the use of supplemental heavy chain resistance (SHCR) in conjunction with plate weights to provide variable resistance to free weight exercises. The purpose of this case study was to determine the actual resistance being provided by a double-looped versus a linear hung SHCR to the back squat exercise. The linear technique simply hangs the chain directly from the bar, whereas the double-looped technique uses a smaller chain to adjust the height of the looped chain. In both techniques, as the squat descends, chain weight is unloaded onto the floor, and as the squat ascends, chain weight is progressively loaded back as resistance. One experienced and trained male weight lifter (age = 33 yr; height = 1.83 m; weight = 111.4 kg) served as the subject. Plate weight was set at 84.1 kg, approximately 50% of the subject's 1 repetition maximum. The SHCR was affixed to load cells, sampling at a frequency of 500 Hz, which were affixed to the Olympic bar. Data were collected as the subject completed the back squat under the following conditions: double-looped 1 chain (9.6 kg), double-looped 2 chains (19.2 kg), linear 1 chain, and linear 2 chains. The double-looped SHCR resulted in a 78-89% unloading of the chain weight at the bottom of the squat, whereas the linear hanging SHCR resulted in only a 36-42% unloading. The double-looped technique provided nearly 2 times the variable resistance at the top of the squat compared with the linear hanging technique, showing that attention must be given to the technique used to hang SHCR.
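    The unloading percentages reported above translate directly into bar resistance. A small sketch using the reported plate and two-chain weights, with mid-range unloading fractions assumed purely for illustration:

```python
def bar_load(plate_kg, chain_kg, fraction_unloaded):
    """Total resistance on the bar when a given fraction of the chain
    weight rests on the floor."""
    return plate_kg + chain_kg * (1.0 - fraction_unloaded)

plate, two_chains = 84.1, 19.2  # kg, from the case study

# Assumed mid-range unloading fractions at the bottom of the squat:
linear_bottom = bar_load(plate, two_chains, 0.39)  # linear hang: ~36-42%
looped_bottom = bar_load(plate, two_chains, 0.84)  # double-looped: ~78-89%
# The double-looped setup leaves less chain weight on the bar at the
# bottom, giving a larger resistance swing from bottom to top of the lift.
```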

  4. Cardiovascular impact of intravenous caffeine in preterm infants.

    PubMed

    Huvanandana, Jacqueline; Thamrin, Cindy; McEwan, Alistair L; Hinder, Murray; Tracy, Mark B

    2018-05-03

    To evaluate the acute effect of intravenous caffeine on heart rate and blood pressure variability in preterm infants. We extracted and compared linear and non-linear features of heart rate and blood pressure variability at two timepoints: prior to and in the two hours following a loading dose of 10 mg/kg caffeine base. We studied 31 preterm infants with arterial blood pressure data and 25 with electrocardiogram data, and compared extracted features prior to and following caffeine administration. We observed a reduction in both scaling exponents (α1, α2) of mean arterial pressure from detrended fluctuation analysis and an increase in the ratio of short-term (SD1) to long-term (SD2) variability from Poincaré analysis (SD1/SD2). Heart rate variability analyses showed a reduction in α1 (mean (SD) of 0.92 (0.21) to 0.86 (0.21), p < 0.01), consistent with increased vagal tone. Following caffeine, beat-to-beat pulse pressure variability (SD) also increased (2.1 (0.64) to 2.5 (0.65) mmHg, p < 0.01). This study highlights potential elevation in autonomic nervous system responsiveness following caffeine administration reflected in both heart rate and blood pressure systems. The observed increase in pulse pressure variability may have implications for caffeine administration to infants with potentially impaired cerebral autoregulation. This article is protected by copyright. All rights reserved.
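    The Poincaré indices reported here (SD1, SD2, and their ratio) are computed directly from successive RR intervals. A minimal sketch on a synthetic series (not patient data):

```python
import numpy as np

def poincare_sd1_sd2(rr):
    """Poincare-plot variability of an RR-interval series: SD1 captures
    beat-to-beat (short-term) variation, SD2 the longer-term spread."""
    rr = np.asarray(rr, dtype=float)
    x, y = rr[:-1], rr[1:]              # consecutive interval pairs
    sd1 = np.std((y - x) / np.sqrt(2), ddof=1)
    sd2 = np.std((y + x) / np.sqrt(2), ddof=1)
    return sd1, sd2

rng = np.random.default_rng(1)
rr = 400 + np.cumsum(rng.normal(scale=5, size=500))  # synthetic RR series, ms
sd1, sd2 = poincare_sd1_sd2(rr)
ratio = sd1 / sd2    # the SD1/SD2 ratio examined in the study
```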

  5. Modeling relationships between catchment attributes and river water quality in southern catchments of the Caspian Sea.

    PubMed

    Hasani Sangani, Mohammad; Jabbarian Amiri, Bahman; Alizadeh Shabani, Afshin; Sakieh, Yousef; Ashrafi, Sohrab

    2015-04-01

    Increasing land utilization through diverse forms of human activities, such as agriculture, forestry, urban growth, and industrial development, has led to negative impacts on the water quality of rivers. To find out how catchment attributes, such as land use, hydrologic soil groups, and lithology, can affect water quality variables (Ca²⁺, Mg²⁺, Na⁺, Cl⁻, HCO₃⁻, pH, TDS, EC, SAR), a spatio-statistical approach was applied to 23 catchments in southern basins of the Caspian Sea. All input data layers (digital maps of land use, soil, and lithology) were prepared using geographic information system (GIS) and spatial analysis. Relationships between water quality variables and catchment attributes were then examined by Spearman rank correlation tests and multiple linear regression. Stepwise approach-based multiple linear regressions were developed to examine the relationship between catchment attributes and water quality variables. The areas (%) of marl, tuff, or diorite, as well as those of good-quality rangeland and bare land, had negative effects on all water quality variables, while those of basalt and forest land cover were found to contribute to improved river water quality. Moreover, lithological variables showed the greatest potential for predicting the mean concentration values of water quality variables, while EC and TDS were inversely associated with the area (%) of urban land use.

  6. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/2)/[1+exp(β/2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
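    The closed-form measures quoted above are straightforward to evaluate for a given group effect. A sketch using an arbitrary illustrative β (not from the paper's example):

```python
from math import exp, sqrt, erf

def norm_cdf(z):
    """Standard normal cdf, Phi(z)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

beta = 1.0  # hypothetical group-effect coefficient from a cumulative link model

# Probit link: ordinal superiority measure = Phi(beta / 2)
probit_measure = norm_cdf(beta / 2)

# Log-log link: exp(beta) / (1 + exp(beta))
loglog_measure = exp(beta) / (1 + exp(beta))

# Logit link (approximation): exp(beta / 2) / (1 + exp(beta / 2))
logit_measure = exp(beta / 2) / (1 + exp(beta / 2))
# At beta = 0 each measure equals 0.5: neither group tends to fall above
# the other; beta > 0 pushes all three above 0.5.
```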

  7. Development and evaluation of height diameter at breast models for native Chinese Metasequoia.

    PubMed

    Liu, Mu; Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-Ling; Sun, Renjie; Zhang, Li

    2017-01-01

    Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear models and 46 were non-linear models. These models were divided into two groups of single models and multivariate models according to the number of independent variables. The results show that the allometric equation for tree height, with diameter at breast height as the independent variable, can better reflect the change of tree height; in addition, the prediction accuracy of the multivariate composite models is higher than that of the single variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, the consideration of tree age when choosing models and parameters can make the prediction of tree height more accurate. The amount of data is also an important factor that can improve the reliability of models. Other variables such as tree height, main dbh and altitude, etc., can also affect models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50-485 years is statistically reliable and can be used for reference in predicting the growth and production of mature native Metasequoia.
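    The single-variable allometry referred to above is typically fitted as a power law, h = a·dbh^b, linearized by taking logs. A minimal sketch on synthetic data; the parameter values are illustrative, not the fitted Metasequoia coefficients:

```python
import numpy as np

def fit_allometric(dbh, height):
    """Fit h = a * dbh**b by ordinary least squares on log-transformed data."""
    b, log_a = np.polyfit(np.log(dbh), np.log(height), 1)
    return np.exp(log_a), b

# Synthetic trees generated from a known power law (a=1.3, b=0.66)
# with small multiplicative noise:
rng = np.random.default_rng(2)
dbh = rng.uniform(5, 120, size=200)                       # cm
height = 1.3 * dbh ** 0.66 * rng.lognormal(sigma=0.02, size=200)

a_hat, b_hat = fit_allometric(dbh, height)  # recovers a ~ 1.3, b ~ 0.66
```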

  8. Development and evaluation of height diameter at breast models for native Chinese Metasequoia

    PubMed Central

    Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-ling; Sun, Renjie; Zhang, Li

    2017-01-01

    Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear models and 46 were non-linear models. These models were divided into two groups of single models and multivariate models according to the number of independent variables. The results show that the allometric equation for tree height, with diameter at breast height as the independent variable, can better reflect the change of tree height; in addition, the prediction accuracy of the multivariate composite models is higher than that of the single variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, the consideration of tree age when choosing models and parameters can make the prediction of tree height more accurate. The amount of data is also an important factor that can improve the reliability of models. Other variables such as tree height, main dbh and altitude, etc., can also affect models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50–485 years is statistically reliable and can be used for reference in predicting the growth and production of mature native Metasequoia. PMID:28817600

  9. Spatial variability in plankton biomass and hydrographic variables along an axial transect in Chesapeake Bay

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Roman, M.; Kimmel, D.; McGilliard, C.; Boicourt, W.

    2006-05-01

    High-resolution, axial sampling surveys were conducted in Chesapeake Bay during April, July, and October from 1996 to 2000 using a towed sampling device equipped with sensors for depth, temperature, conductivity, oxygen, fluorescence, and an optical plankton counter (OPC). The results suggest that the axial distribution and variability of hydrographic and biological parameters in Chesapeake Bay were primarily influenced by the source and magnitude of freshwater input. Bay-wide spatial trends in the water column-averaged values of salinity were linear functions of distance from the main source of freshwater, the Susquehanna River, at the head of the bay. However, spatial trends in the water column-averaged values of temperature, dissolved oxygen, chlorophyll-a and zooplankton biomass were nonlinear along the axis of the bay. Autocorrelation analysis and the residuals of linear and quadratic regressions between each variable and latitude were used to quantify the patch sizes for each axial transect. The patch sizes of each variable depended on whether the data were detrended, and the detrending techniques applied. However, the patch size of each variable was generally larger using the original data compared to the detrended data. The patch sizes of salinity were larger than those for dissolved oxygen, chlorophyll-a and zooplankton biomass, suggesting that more localized processes influence the production and consumption of plankton. This high-resolution quantification of the zooplankton spatial variability and patch size can be used for more realistic assessments of the zooplankton forage base for larval fish species.
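    The detrending-plus-autocorrelation workflow used above to estimate patch sizes can be sketched as follows. The synthetic along-axis "salinity" below (linear trend, short-scale patchiness, noise) is purely illustrative:

```python
import numpy as np

def detrend(y, x, degree):
    """Remove a linear (degree=1) or quadratic (degree=2) least-squares
    trend, returning the residuals."""
    coeffs = np.polyfit(x, y, degree)
    return y - np.polyval(coeffs, x)

def autocorr(y, lag):
    """Sample autocorrelation of a series at a given lag."""
    y = y - y.mean()
    return np.dot(y[:-lag], y[lag:]) / np.dot(y, y)

rng = np.random.default_rng(3)
x = np.linspace(0, 300, 600)                     # distance down-bay, km
y = 30 - 0.08 * x + np.sin(x / 5.0) + rng.normal(scale=0.2, size=600)

resid = detrend(y, x, degree=1)
# Detrending removes the bay-wide gradient, so the residual variance and
# the apparent patch scale both shrink relative to the original series.
```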

  10. Nonparametric Bayesian Multiple Imputation for Incomplete Categorical Variables in Large-Scale Assessment Surveys

    ERIC Educational Resources Information Center

    Si, Yajuan; Reiter, Jerome P.

    2013-01-01

    In many surveys, the data comprise a large number of categorical variables that suffer from item nonresponse. Standard methods for multiple imputation, like log-linear models or sequential regression imputation, can fail to capture complex dependencies and can be difficult to implement effectively in high dimensions. We present a fully Bayesian,…

  11. Modeling Signal-Noise Processes Supports Student Construction of a Hierarchical Image of Sample

    ERIC Educational Resources Information Center

    Lehrer, Richard

    2017-01-01

    Grade 6 (modal age 11) students invented and revised models of the variability generated as each measured the perimeter of a table in their classroom. To construct models, students represented variability as a linear composite of true measure (signal) and multiple sources of random error. Students revised models by developing sampling…

  12. A statistical model for Windstorm Variability over the British Isles based on Large-scale Atmospheric and Oceanic Mechanisms

    NASA Astrophysics Data System (ADS)

    Kirchner-Bossi, Nicolas; Befort, Daniel J.; Wild, Simon B.; Ulbrich, Uwe; Leckebusch, Gregor C.

    2016-04-01

    Time-clustered winter storms are responsible for a majority of the wind-induced losses in Europe. Over recent years, different atmospheric and oceanic large-scale mechanisms such as the North Atlantic Oscillation (NAO) or the Meridional Overturning Circulation (MOC) have been proven to drive some significant portion of the windstorm variability over Europe. In this work we systematically investigate the influence of different large-scale natural variability modes: more than 20 indices related to those mechanisms with proven or potential influence on the windstorm frequency variability over Europe - mostly SST- or pressure-based - are derived by means of ECMWF ERA-20C reanalysis during the last century (1902-2009), and compared to the windstorm variability for the European winter (DJF). Windstorms are defined and tracked as in Leckebusch et al. (2008). The derived indices are then employed to develop a statistical procedure including a stepwise Multiple Linear Regression (MLR) and an Artificial Neural Network (ANN), aiming to hindcast the inter-annual (DJF) regional windstorm frequency variability in a case study for the British Isles. This case study reveals 13 indices with a statistically significant coupling with seasonal windstorm counts. The Scandinavian Pattern (SCA) showed the strongest correlation (0.61), followed by the NAO (0.48) and the Polar/Eurasia Pattern (0.46). The obtained indices (standard-normalised) are selected as predictors for a windstorm variability hindcast model applied for the British Isles. First, a stepwise linear regression is performed, to identify which mechanisms can explain windstorm variability best. Finally, the indices retained by the stepwise regression are used to develop a multilayer perceptron-based ANN that hindcasts seasonal windstorm frequency and clustering. Eight indices (SCA, NAO, EA, PDO, W.NAtl.SST, AMO (unsmoothed), EA/WR and Trop.N.Atl SST) are retained by the stepwise regression.
Among them, SCA showed the highest linear coefficient, followed by SST in the western Atlantic, AMO and NAO. The explanatory regression model (considering all time steps) provided a Coefficient of Determination (R²) of 0.75. A predictive version of the linear model applying a leave-one-out cross-validation (LOOCV) shows an R² of 0.56 and a relative RMSE of 4.67 counts/season. An ANN-based nonlinear hindcast model for the seasonal windstorm frequency is developed with the aim to improve the stepwise hindcast ability and thus better predict a time-clustered season over the case study. A perceptron with a 7-node hidden layer is set up, and the LOOCV procedure reveals an R² of 0.71. In comparison to the stepwise MLR, the RMSE is reduced by 20%. This work shows that for the British Isles case study, most of the interannual variability can be explained by certain large-scale mechanisms, considering also nonlinear effects (ANN). This makes it possible to distinguish a time-clustered season from a non-clustered one - a key issue for applications e.g., in the (re)insurance industry.
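    The leave-one-out cross-validation used to score these hindcast models can be sketched for the linear case. The synthetic "seasonal counts" and index coefficients below are illustrative, not the fitted values from the study:

```python
import numpy as np

def loocv_r2(X, y):
    """Leave-one-out cross-validation of an ordinary-least-squares model:
    each season is predicted from a model fitted on all the others."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        A = np.column_stack([np.ones(mask.sum()), X[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        preds[i] = np.concatenate(([1.0], X[i])) @ coef
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic seasonal windstorm counts driven by two standardized indices
# (think SCA- and NAO-like predictors) plus noise:
rng = np.random.default_rng(4)
idx = rng.normal(size=(108, 2))                  # 108 winters, 2 indices
counts = 10 + 3 * idx[:, 0] + 2 * idx[:, 1] + rng.normal(scale=1.0, size=108)

r2 = loocv_r2(idx, counts)    # out-of-sample skill under LOOCV
```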

  13. Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1997-01-01

    The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability of failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.

  14. Experimental quantum computing to solve systems of linear equations.

    PubMed

    Cai, X-D; Weedbrook, C; Su, Z-E; Chen, M-C; Gu, Mile; Zhu, M-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei

    2013-06-07

    Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables N. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order log(N), giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving 2×2 linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.
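    For scale, the 2×2 instance realized in this experiment is trivial classically; the claimed quantum advantage of the underlying algorithm (Harrow-Hassidim-Lloyd, HHL) concerns asymptotic scaling for large, well-conditioned sparse systems. A classical sketch of a same-size problem, with an arbitrarily chosen matrix and right-hand side:

```python
import numpy as np

# An arbitrary well-conditioned 2x2 system A x = b:
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, 0.0])

# Direct classical solve; cost grows polynomially with the number of
# variables N, versus the O(log N) scaling promised by the quantum
# algorithm for sparse, well-conditioned systems.
x = np.linalg.solve(A, b)
```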

  15. Linear and non-linear interdependence of EEG and HRV frequency bands in human sleep.

    PubMed

    Chaparro-Vargas, Ramiro; Dissanayaka, P Chamila; Patti, Chanakya Reddy; Schilling, Claudia; Schredl, Michael; Cvetkovic, Dean

    2014-01-01

    The characterisation of functional interdependencies of the autonomic nervous system (ANS) is of ever-growing interest for unveiling electroencephalographic (EEG) and Heart Rate Variability (HRV) interactions. This paper presents a biosignal processing approach as a supportive computational resource in the estimation of sleep dynamics. The application of linear and non-linear methods and statistical tests to 10 overnight polysomnographic (PSG) recordings allowed the computation of wavelet coherence and phase locking values, in order to identify discerning features amongst the clinically healthy subjects. Our findings showed that neuronal oscillations θ, α and σ interact with cardiac power bands at mid-to-high levels of coherence and phase locking, particularly during NREM sleep stages.
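    The phase-locking value used above has a compact definition: the magnitude of the time-averaged unit phasor of the instantaneous phase difference between two signals. A sketch on synthetic oscillations (the frequency and phase lag are illustrative, not taken from the recordings):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV: magnitude of the mean unit phasor of the instantaneous phase
    difference (1 = perfectly locked, near 0 = independent phases)."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

t = np.arange(0, 8, 1 / 256)                  # 8 s sampled at 256 Hz
rng = np.random.default_rng(5)
theta = np.sin(2 * np.pi * 6 * t)             # theta-band-like oscillation
locked = np.sin(2 * np.pi * 6 * t + 0.8)      # constant phase lag -> PLV ~ 1
noise = rng.normal(size=t.size)               # no consistent phase relation
```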

  16. Quantile regression models of animal habitat relationships

    USGS Publications Warehouse

    Cade, Brian S.

    2003-01-01

    Typically, all factors that limit an organism are not measured and included in statistical models used to investigate relationships with their environment. If important unmeasured variables interact multiplicatively with the measured variables, the statistical models often will have heterogeneous response distributions with unequal variances. Quantile regression is an approach for estimating the conditional quantiles of a response variable distribution in the linear model, providing a more complete view of possible causal relationships between variables in ecological processes. Chapter 1 introduces quantile regression and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of estimates for homogeneous and heterogeneous regression models. Chapter 2 evaluates performance of quantile rankscore tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). A permutation F test maintained better Type I errors than the Chi-square T test for models with smaller n, greater number of parameters p, and more extreme quantiles τ. Both versions of the test required weighting to maintain correct Type I errors when there was heterogeneity under the alternative model. An example application related trout densities to stream channel width:depth. Chapter 3 evaluates a drop in dispersion, F-ratio like permutation test for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). Chapter 4 simulates from a large (N = 10,000) finite population representing grid areas on a landscape to demonstrate various forms of hidden bias that might occur when the effect of a measured habitat variable on some animal was confounded with the effect of another unmeasured variable (spatially and not spatially structured). 
Depending on whether interactions of the measured habitat and unmeasured variable were negative (interference interactions) or positive (facilitation interactions), either upper (τ > 0.5) or lower (τ < 0.5) quantile regression parameters were less biased than mean rate parameters. Sampling (n = 20 - 300) simulations demonstrated that confidence intervals constructed by inverting rankscore tests provided valid coverage of these biased parameters. Quantile regression was used to estimate effects of physical habitat resources on a bivalve mussel (Macomona liliana) in a New Zealand harbor by modeling the spatial trend surface as a cubic polynomial of location coordinates.
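    Quantile regression estimates minimize the asymmetric "check" (pinball) loss rather than squared error; in the intercept-only case the minimizer is simply the τ-th sample quantile. A minimal numpy sketch of that fact (synthetic data, grid-search fit for clarity rather than the linear-programming solvers used in practice):

```python
import numpy as np

def pinball_loss(residuals, tau):
    """Check loss minimized by quantile regression:
    tau * r for r >= 0 and (tau - 1) * r for r < 0."""
    r = np.asarray(residuals, dtype=float)
    return np.where(r >= 0, tau * r, (tau - 1) * r).sum()

def constant_quantile_fit(y, tau):
    """Intercept-only quantile 'regression': search the sample values for
    the constant minimizing the check loss; it lands on the tau quantile."""
    grid = np.sort(y)
    losses = [pinball_loss(y - c, tau) for c in grid]
    return grid[int(np.argmin(losses))]

rng = np.random.default_rng(6)
y = rng.exponential(scale=2.0, size=2001)   # skewed response
c90 = constant_quantile_fit(y, 0.90)        # ~ the 90th percentile of y
```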

  17. Linear models: permutation methods

    USGS Publications Warehouse

    Cade, B.S.; Everitt, B.S.; Howell, D.C.

    2005-01-01

    Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well-known that estimates of the mean in linear model are extremely sensitive to even a single outlying value of the dependent variable compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution or responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
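    A basic permutation test for a linear-model effect can be sketched in a few lines: shuffle the response to break any dependence, and compare the observed slope against the resulting null distribution. The data below are synthetic and illustrative:

```python
import numpy as np

def slope(x, y):
    """OLS slope of y on x (with intercept, via centering)."""
    xc = x - x.mean()
    yc = y - y.mean()
    return np.dot(xc, yc) / np.dot(xc, xc)

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    """Two-sided permutation test for the regression slope: permuting y
    builds the null distribution of the slope estimate."""
    rng = np.random.default_rng(seed)
    observed = abs(slope(x, y))
    null = np.array([abs(slope(x, rng.permutation(y))) for _ in range(n_perm)])
    return (np.sum(null >= observed) + 1) / (n_perm + 1)

rng = np.random.default_rng(7)
x = rng.normal(size=80)
y_signal = 0.8 * x + rng.normal(size=80)   # real dependence -> small p-value
y_null = rng.normal(size=80)               # no dependence -> p not small
```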

  18. Nonlinear vs. linear biasing in Trp-cage folding simulations

    NASA Astrophysics Data System (ADS)

    Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka

    2015-03-01

    Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low dimensional embeddings as collective variables. Folding of the mini-protein was successfully simulated in a 200 ns simulation with linear biasing and non-linear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.

  19. Nonlinear vs. linear biasing in Trp-cage folding simulations.

    PubMed

    Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka

    2015-03-21

    Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low dimensional embeddings as collective variables. Folding of the mini-protein was successfully simulated in a 200 ns simulation with linear biasing and non-linear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.

  20. A Family of Ellipse Methods for Solving Non-Linear Equations

    ERIC Educational Resources Information Center

    Gupta, K. C.; Kanwar, V.; Kumar, Sanjeev

    2009-01-01

    This note presents a method for the numerical approximation of simple zeros of a non-linear equation in one variable. In order to do so, the method uses an ellipse rather than a tangent approach. The main advantage of our method is that it does not fail even if the derivative of the function is either zero or very small in the vicinity of the…
