Science.gov

Sample records for bioimpedance estimation theoretical

  1. Effects of Intense Physical Activity with Free Water Replacement on Bioimpedance Parameters and Body Fluid Estimates

    NASA Astrophysics Data System (ADS)

    Neves, E. B.; Ulbricht, L.; Krueger, E.; Romaneli, E. F. R.; Souza, M. N.

    2012-12-01

    Several authors have emphasized the need for careful preparation in order to obtain reliable bioimpedance acquisitions. Despite this, some authors have reported that intense physical training has little effect on Bioimpedance Analysis (BIA), while others have observed significant effects on bioimpedance parameters under the same conditions, leading to body composition estimates considered incompatible with human physiology. The aim of this work was to quantify the changes in bioimpedance parameters, as well as in the body fluid estimates obtained by BIA, after four hours of intense physical activity with free water replacement in young males. A Xitron Hydra 4200 instrument was used to acquire bioimpedance data before and immediately after the physical training, and body fluids were then estimated from the bioimpedance parameters. Height and weight of all subjects were also measured, to the nearest 0.1 cm and 0.1 kg respectively. The results indicate that, among the bioimpedance parameters, extracellular resistance presented the most coherent behavior, leading to reliable estimates of the extracellular fluid and part of the total body water. The results also show decreases in the participants' height and weight, which were attributed to decreased body hydration, including in the intervertebral discs.

  2. Bioimpedance spectroscopy for the estimation of body fluid volumes in mice.

    PubMed

    Chapman, M E; Hu, L; Plato, C F; Kohan, D E

    2010-07-01

    Conventional indicator dilution techniques for measuring body fluid volume are laborious, expensive, and highly invasive. Bioimpedance spectroscopy (BIS) may be a useful alternative, being rapid, minimally invasive, and suited to repeated measurements. BIS has not been reported in mice; hence we examined how well BIS estimates body fluid volume in mice. In C57/Bl6 mice, the BIS system demonstrated <5% intermouse variation in total body water (TBW), extracellular fluid volume (ECFV), and intracellular fluid volume (ICFV) among animals of similar body weight. TBW, ECFV, and ICFV differed between heavier male and lighter female mice; however, the ratios of TBW, ECFV, and ICFV to body weight did not differ between mice and corresponded closely to values in the literature. Furthermore, repeat measurements over 1 wk demonstrated <5% intramouse variation. The default resistance coefficients used by the BIS system, defined for rats, produced TBW values that exceeded body weight in mice. Therefore, body composition was measured in mice using a range of resistance coefficients. Coefficients at 10% of those defined for rats provided TBW, ECFV, and ICFV ratios to body weight similar to those obtained by conventional isotope dilution. The sensitivity of the BIS system was further evaluated by its ability to detect volume changes after saline infusion; saline produced the predicted changes in compartmental fluid volumes. In summary, BIS is a noninvasive and accurate method for the estimation of body composition in mice. The ability to perform serial measurements will make it a useful tool for future studies.
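The <5% intermouse and intramouse variation figures above are coefficients of variation. A minimal sketch of that check (the repeat TBW readings below are hypothetical, not the study's data):

```python
import statistics

def coefficient_of_variation(values):
    """Percent CV: sample standard deviation over the mean, times 100."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical repeat total-body-water readings (ml) for one mouse over a
# week; the study reports <5% intramouse variation for such series.
tbw_readings = [16.1, 16.4, 15.9, 16.2, 16.3]
cv = coefficient_of_variation(tbw_readings)
print(f"intramouse CV = {cv:.2f}%")
```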

  3. Comparison of cardiac output determined by bioimpedance and bioreactance methods at rest and during exercise.

    PubMed

    Jakovljevic, Djordje G; Moore, Sarah; Hallsworth, Kate; Fattakhova, Gulnar; Thoma, Christian; Trenell, Michael I

    2012-04-01

    Bioreactance is a novel non-invasive method for cardiac output measurement that analyses blood flow-dependent changes in the phase shifts of electrical currents applied across the chest. The present study (1) compared resting and exercise cardiac outputs determined by the bioreactance and bioimpedance methods with those estimated from measured oxygen consumption, (2) determined the relationship between cardiac output and oxygen consumption, and (3) assessed the agreement between the bioreactance and bioimpedance methods. Twelve healthy subjects (aged 30 ± 4 years) performed a graded cardiopulmonary exercise test on a recumbent cycle ergometer on two occasions, 1 week apart. Cardiac output was monitored at rest, at 30, 50, 70, 90 and 150 W, and at peak exercise intensity by bioreactance and bioimpedance, while expired gases were collected. Resting cardiac output was not significantly different between the bioreactance and bioimpedance methods (6.2 ± 1.4 vs. 6.5 ± 1.4 l min(-1), P = 0.42). During exercise, cardiac outputs were correlated with oxygen uptake for both the bioreactance (r = 0.84, P < 0.01) and bioimpedance techniques (r = 0.82, P < 0.01). At peak exercise, bioimpedance gave significantly lower cardiac outputs than both bioreactance and the theoretically calculated cardiac output (14.3 ± 2.6 vs. 17.5 ± 5.2 vs. 16.9 ± 4.9 l min(-1), P < 0.05). Bland-Altman analyses including data from rest and exercise demonstrated that the bioimpedance method reported ~1.5 l min(-1) lower cardiac output than bioreactance, with lower and upper limits of agreement of -2.98 and 5.98 l min(-1). The bioimpedance and bioreactance methods provide different cardiac output estimates, particularly at high exercise intensity, and therefore the two methods cannot be used interchangeably. In contrast with bioimpedance, bioreactance cardiac outputs are similar to those estimated from measured oxygen consumption.
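Bland-Altman agreement, as used above, reduces to a bias (mean paired difference) and limits of agreement at bias ± 1.96 SD of the differences. A sketch with hypothetical paired cardiac outputs (the study's raw data are not reproduced here):

```python
import statistics

def bland_altman(a, b):
    """Return bias (mean of a-b) and the 95% limits of agreement
    (bias -/+ 1.96 * SD of the paired differences)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired cardiac outputs (l/min), rest through peak exercise.
bioreactance = [6.2, 8.1, 10.4, 12.3, 14.8, 17.5]
bioimpedance = [6.5, 7.6, 9.8, 11.1, 13.0, 14.3]
bias, lower, upper = bland_altman(bioreactance, bioimpedance)
print(f"bias = {bias:.2f} l/min, LoA = [{lower:.2f}, {upper:.2f}]")
```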

  4. The Theory and Fundamentals of Bioimpedance Analysis in Clinical Status Monitoring and Diagnosis of Diseases

    PubMed Central

    Khalil, Sami F.; Mohktar, Mas S.; Ibrahim, Fatimah

    2014-01-01

    Bioimpedance analysis is a noninvasive, low-cost and commonly used approach for body composition measurement and assessment of clinical condition. A variety of methods are applied to interpret measured bioimpedance data, and bioimpedance is used in a wide range of body composition estimation and clinical status evaluation settings. This paper reviews the main concepts of bioimpedance measurement techniques, including frequency-based and allocation-based methods, bioimpedance vector analysis, and real-time bioimpedance analysis systems. Commonly used prediction equations for body composition assessment, and the influence of anthropometric measurements, gender, ethnicity, posture, measurement protocols and electrode artifacts on the estimated values, are also discussed. In addition, this paper discusses the use of bioimpedance analysis to assess abnormal loss of lean body mass and unbalanced shifts in body fluids, and summarizes its diagnostic use in conditions such as cardiac, pulmonary, renal, neural and infectious diseases. PMID:24949644
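Many of the prediction equations reviewed share the impedance-index form FFM = a·H²/R + b·W + c. A sketch with illustrative (not published) coefficients:

```python
def fat_free_mass(height_cm, resistance_ohm, weight_kg,
                  a=0.50, b=0.18, c=5.0):
    """Generic single-frequency BIA prediction equation of the common form
    FFM = a * H^2/R + b * W + c.  The coefficients here are illustrative
    placeholders; published equations are population-specific."""
    impedance_index = height_cm ** 2 / resistance_ohm
    return a * impedance_index + b * weight_kg + c

ffm = fat_free_mass(height_cm=175, resistance_ohm=480, weight_kg=70.0)
fat_mass = 70.0 - ffm
print(f"FFM = {ffm:.1f} kg, fat mass = {fat_mass:.1f} kg")
```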

  5. The theory and fundamentals of bioimpedance analysis in clinical status monitoring and diagnosis of diseases.

    PubMed

    Khalil, Sami F; Mohktar, Mas S; Ibrahim, Fatimah

    2014-06-19

    Bioimpedance analysis is a noninvasive, low-cost and commonly used approach for body composition measurement and assessment of clinical condition. A variety of methods are applied to interpret measured bioimpedance data, and bioimpedance is used in a wide range of body composition estimation and clinical status evaluation settings. This paper reviews the main concepts of bioimpedance measurement techniques, including frequency-based and allocation-based methods, bioimpedance vector analysis, and real-time bioimpedance analysis systems. Commonly used prediction equations for body composition assessment, and the influence of anthropometric measurements, gender, ethnicity, posture, measurement protocols and electrode artifacts on the estimated values, are also discussed. In addition, this paper discusses the use of bioimpedance analysis to assess abnormal loss of lean body mass and unbalanced shifts in body fluids, and summarizes its diagnostic use in conditions such as cardiac, pulmonary, renal, neural and infectious diseases.

  6. Sound velocity estimation: A system theoretic approach

    SciTech Connect

    Candy, J.V.; Sullivan, E.J.

    1993-07-30

    A system-theoretic approach is proposed to investigate the feasibility of reconstructing a sound velocity profile (SVP) from acoustical hydrophone measurements. The approach is based on a state-space representation of the normal-mode propagation model. It is shown that this representation can be used to investigate the observability of the SVP from noisy measurement data. A model-based processor is developed to extract this information, and it is shown that the SVP can be estimated with this approach even in cases where only limited SVP information is available.
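Observability of a state from measurements, as invoked above, is conventionally checked via the rank of the stacked observability matrix [C; CA; CA²; …]. A self-contained sketch on a toy 2-state system (the matrices are hypothetical, not the paper's normal-mode model):

```python
def mat_mul(a, b):
    """Plain nested-list matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def observability_rank(A, C):
    """Rank of the stacked observability matrix [C; CA; CA^2; ...];
    full rank (= number of states) means the state is observable."""
    n = len(A)
    rows, block = [], [row[:] for row in C]
    for _ in range(n):
        rows.extend(block)
        block = mat_mul(block, A)
    # Gaussian elimination to count independent rows.
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows))
                    if abs(rows[i][col]) > 1e-10), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and abs(rows[i][col]) > 1e-10:
                f = rows[i][col] / rows[rank][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

# Toy 2-state system (hypothetical numbers, not the paper's model).
A = [[0.9, 0.1], [0.0, 0.8]]
rank_obs = observability_rank(A, [[1.0, 0.0]])    # output sees state 1
rank_unobs = observability_rank(A, [[0.0, 1.0]])  # state 1 never reaches output
print(rank_obs, rank_unobs)
```

With the first output matrix the rank equals the state dimension (observable); with the second, state 1 neither drives state 2 nor the output, so the rank drops.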

  7. Electromagnetic holographic imaging of bioimpedance

    NASA Astrophysics Data System (ADS)

    Smith, Dexter G.; Ko, Harvey W.; Lee, Benjamin R.; Partin, Alan W.

    1998-05-01

    The electromagnetic bioimpedance method has successfully measured the very subtle conductivity changes associated with brain edema and prostate tumors. It provides noninvasive measurements using non-ionizing magnetic fields applied with a small coil, avoiding the use of contact electrodes. This paper presents results from combining a holographic signal-processing algorithm with a low-power coil system to provide 3D images of impedance contrast, which should make the noninvasive electromagnetic bioimpedance method useful in health care.

  8. Theoretical Estimate of Maximum Possible Nuclear Explosion

    DOE R&D Accomplishments Database

    Bethe, H. A.

    1950-01-31

    The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements, with uranium as metal, alloy, and oxide, were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)

  9. Bioimpedance spectroscopy in haemodynamic analysis

    NASA Astrophysics Data System (ADS)

    McCullagh, W. A.; Ward, L. C.

    2010-04-01

    Venous insufficiency is estimated to affect between 10% and 35% of the US population, with symptoms ranging from mild discomfort to chronic ulceration that may reduce quality of life. Early diagnosis is the key to pre-emptive treatment. We have previously reported [5] an impedance technique for measurement of calf muscle pump function, although it was noted that the results could be confounded by changes in limb geometry during the exercise protocol. We report here a modified protocol that accounts for changes in limb geometry. The impedance of a 20 cm segment of the calf was continuously recorded using an SFB7 bioimpedance spectrometer while subjects performed a sequence of manoeuvres: a) supine with leg raised (for 10 min); b) standing (4 s); c) plantar flexion (tiptoe, 4 s); d) standing elevated on one leg, removing tension on the measured leg with the foot horizontal (4 s); e) with the leg relaxed (4 s); the sequence was then repeated. The impedance ratio k = (e-d)/(c-b) is the proportion of the impedance change occurring during calf muscle pumping that is due primarily to change in limb shape alone, i.e. independent of the impedance change due to ejection of blood by the calf muscle pump. Thus (1-k) can be used to correct the ejection fraction (%), calculated as (c-b)/(a-b)*100, for the confounding effect of the change in limb geometry. Ejection fractions calculated by this method in 10 control subjects were 51.5 ± 30.1%, with no values greater than 100% as had been found previously.
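The correction described above is simple arithmetic on the five impedance readings. A sketch (the ohm values are hypothetical):

```python
def corrected_ejection_fraction(a, b, c, d, e):
    """Calf muscle pump ejection fraction (%) with the geometry correction:
    k = (e-d)/(c-b) is the fraction of the impedance change attributable to
    limb-shape change, so the raw EF (c-b)/(a-b)*100 is scaled by (1-k)."""
    k = (e - d) / (c - b)
    raw_ef = (c - b) / (a - b) * 100.0
    return (1.0 - k) * raw_ef

# Hypothetical impedance readings (ohm) at stages a-e of the manoeuvres.
ef = corrected_ejection_fraction(a=58.0, b=50.0, c=54.0, d=52.5, e=53.5)
print(f"corrected ejection fraction = {ef:.1f}%")
```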

  10. Body fat assessment by a new bipedal bioimpedance instrument in normal weight and obese women.

    PubMed

    Hainer, V; Kunesová, M; Parízková, J; Stich, V; Horejs, J; Müller, L

    1995-01-01

    The aim of the study was to evaluate a new bioimpedance method for assessing body fat that employs bipedal electrodes instead of electrodes attached to both upper and lower extremities. The new analyzer (TBF-105, Tanita Corp., Tokyo, Japan) enables simultaneous measurement of body weight and total body resistance while the subject stands on stainless-steel electrodes. The instrument was tested in both normal weight and obese women. Fat mass estimated by bipedal bioimpedance was highly correlated with that determined by hydrodensitometry (n = 145, r = 0.945, p < 0.001). Fat mass estimated by bipedal bioimpedance correlated significantly not only with subcutaneous fat, measured as the sum of 10 skinfolds (r = 0.758, p < 0.001), but also with visceral fat, determined as an area on a CT scan (r = 0.780, p < 0.001). Anthropometric variables did not substantially influence the differences in fat mass determined by bipedal bioimpedance and by densitometry. No overestimation of total fat mass by bipedal bioimpedance was observed in severely obese individuals, even in those with higher fat accumulation in the limbs. In conclusion, our data demonstrate that the new bioimpedance instrument employing bipedal electrodes is a reliable tool for rapid body fat assessment in both normal weight and obese women.

  11. Multilead measurement system for the time-domain analysis of bioimpedance magnitude.

    PubMed

    Gracia, J; Seppa, V P; Viik, J; Hyttinen, J

    2012-08-01

    Bioimpedance measurement applications range from the characterization of organic matter to the monitoring of biological signals and physiological parameters. Occasionally, multiple bioimpedances measured at different locations are combined in order to solve complex problems or produce enhanced physiological measures. Existing multilead bioimpedance measurement methods focus mainly on electrical impedance tomography; systems designed to suit other multilead applications are lacking. In this study, a novel multilead bioimpedance measurement system was designed, aimed particularly at the time-domain analysis of bioimpedance magnitude. Frequency-division multiplexing was used to avoid overlap between excitation signals; undersampling, to reduce the hardware requirements; and power-isolated active current sources, to reduce the electrical interactions between leads. These concepts were implemented in a prototype device, which was tested on equivalent circuits and a saline tank in order to assess excitation signal interference and electrical interactions between leads. The results showed that the proposed techniques are functional, and the system's validity was demonstrated in a real application, multilead impedance pneumography. Potential applications and further improvements are discussed. It is concluded that the novel approach potentially enables accurate and relatively low-power multilead bioimpedance measurement systems.
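Frequency-division multiplexing as described above works because each lead's excitation can be recovered by synchronous (lock-in) demodulation at its own carrier. A sketch with two hypothetical tones (the frequencies are assumptions, not the paper's design values):

```python
import math

def lockin_amplitude(signal, fs, f_target):
    """Synchronous (lock-in) demodulation: correlating with a quadrature
    pair at the target frequency recovers that channel's amplitude, which
    is how frequency-division-multiplexed leads can be separated."""
    n = len(signal)
    i_sum = sum(s * math.cos(2 * math.pi * f_target * k / fs)
                for k, s in enumerate(signal))
    q_sum = sum(s * math.sin(2 * math.pi * f_target * k / fs)
                for k, s in enumerate(signal))
    return 2.0 * math.hypot(i_sum, q_sum) / n

# Two multiplexed tones; frequencies chosen (as an assumption) to give
# whole numbers of cycles in the record so the channels are orthogonal.
fs, n = 1000.0, 1000
sig = [1.5 * math.sin(2 * math.pi * 100 * k / fs) +
       0.7 * math.sin(2 * math.pi * 150 * k / fs) for k in range(n)]
amp100 = lockin_amplitude(sig, fs, 100.0)
amp150 = lockin_amplitude(sig, fs, 150.0)
print(amp100, amp150)  # ~1.5 and ~0.7
```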

  12. Some Basic Techniques in Bioimpedance Research

    NASA Astrophysics Data System (ADS)

    Martinsen, Ørjan G.

    2004-09-01

    Any physiological or anatomical change in a biological material will also change its electrical properties. Hence, bioimpedance measurements can be used for the diagnosis or classification of tissue. Applications are numerous within medicine, biology, cosmetics, the food industry, sports, and more. Different basic approaches to the development of bioimpedance techniques are discussed in this paper.

  13. Potential benefits of remote sensing: Theoretical framework and empirical estimate

    NASA Technical Reports Server (NTRS)

    Eisgruber, L. M.

    1972-01-01

    A theoretical framework is outlined for estimating the social returns from research on and application of remote sensing. The approximate dollar magnitude of a particular application of remote sensing, namely estimates of corn, soybean, and wheat production, is given. Finally, some comments are made on the limitations of this procedure and on the implications of the results.

  14. Bioimpedance spectroscopy as technique of hematological and biochemical analysis of blood

    NASA Astrophysics Data System (ADS)

    Malahov, M. V.; Smirnov, A. V.; Nikolaev, D. V.; Melnikov, A. A.; Vikulov, A. D.

    2010-04-01

    Bioimpedance spectroscopy may become a useful method for the express analysis and monitoring of blood parameters. The aim of this study was to identify biochemical and hematological parameters of blood that can be accurately predicted by the bioimpedance technique. Hematological parameters (red and white blood cell parameters) and biochemical parameters (total protein, albumin, fibrinogen, and sodium, potassium and chloride ion concentrations in plasma) were measured with a hematological analyzer and routine methods. Bioimpedance spectroscopy of whole blood (1.5 ml) in the frequency range 5-500 kHz (31 frequencies) was performed using the ABC-01 "Medass" BIA analyzer. The frequency dependence of the resistance and reactance of whole blood and the parameters of the Cole model were investigated. Close simple and multiple correlations of the bioimpedance indices were observed only with erythrocyte parameters (Ht, Hb, RBC). Thus bioimpedance analysis of whole blood can accurately predict red cell parameters, but it is less effective for the estimation of plasma biochemical and white cell parameters.
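The Cole model mentioned above describes an impedance spectrum with four parameters. A sketch of evaluating it across the study's 5-500 kHz band (the parameter values are hypothetical, not fitted to blood):

```python
import math

def cole_impedance(freq_hz, r0, rinf, tau, alpha):
    """Cole model: Z(w) = Rinf + (R0 - Rinf) / (1 + (j*w*tau)**alpha)."""
    w = 2 * math.pi * freq_hz
    return rinf + (r0 - rinf) / (1 + (1j * w * tau) ** alpha)

# Hypothetical Cole parameters; sweep the band used by the study's
# 31-frequency protocol.  Resistance falls with frequency and reactance
# is negative (capacitive), as expected for tissue and blood.
for f in (5e3, 50e3, 500e3):
    z = cole_impedance(f, r0=120.0, rinf=40.0, tau=1e-6, alpha=0.8)
    print(f"{f/1e3:6.0f} kHz   R = {z.real:6.1f} ohm   X = {z.imag:6.1f} ohm")
```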

  15. Estimating the theoretical semivariogram from finite numbers of measurements

    USGS Publications Warehouse

    Zheng, Lingyun; Silliman, S.E.

    2000-01-01

    We investigate on a theoretical basis the impacts of the number, location, and correlation among measurement points on the quality of an estimate of the semivariogram. The unbiased nature of the semivariogram estimator γ̂(r) is first established for a general random process Z(x). The variance of γ̂(r) is then derived as a function of the sampling parameters (the number of measurements and their locations). In applying this function to the estimation of the semivariograms of the transmissivity and hydraulic head fields, it is shown that the estimation error depends on the number of data pairs, the correlation among the data pairs (which, in turn, is determined by the form of the underlying semivariogram γ(r)), the relative locations of the data pairs, and the separation distance at which the semivariogram is to be estimated. The design of an optimal sampling program for semivariogram estimation should therefore consider each of these factors. Further, the function derived for the variance of γ̂(r) is useful in determining the reliability of a semivariogram developed from a previously established sampling design.
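The classical (Matheron) semivariogram estimator averages squared increments over the N(h) data pairs at each separation, which is the quantity whose variance the paper analyses. A 1-D sketch with hypothetical transect data:

```python
def empirical_semivariogram(positions, values, lag, tol):
    """Matheron estimator: gamma_hat(h) = sum (Z(x_i) - Z(x_j))^2 / (2 N(h))
    over the N(h) pairs whose separation lies within lag +/- tol."""
    total, count = 0.0, 0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if abs(abs(positions[i] - positions[j]) - lag) <= tol:
                total += (values[i] - values[j]) ** 2
                count += 1
    gamma = total / (2 * count) if count else float("nan")
    return gamma, count

# Hypothetical 1-D transect of log-transmissivity measurements.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
zs = [2.1, 2.4, 2.0, 2.6, 2.3, 2.8]
gamma, n_pairs = empirical_semivariogram(xs, zs, lag=1.0, tol=0.1)
print(f"gamma_hat(1.0) = {gamma:.3f} from {n_pairs} pairs")
```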

  16. PART I: Theoretical Site Response Estimation for Microzoning Purposes

    NASA Astrophysics Data System (ADS)

    Triantafyllidis, P.; Suhadolc, P.; Hatzidimitriou, P. M.; Anastasiadis, A.; Theodulidis, N.

    We estimate the theoretical site response along seven cross sections located in the city of Thessaloniki (Greece). For this purpose, the 2-D structural models used are based on the known geometry and the dynamic soil properties derived from borehole measurements and other geophysical techniques. Several double-couple sources have been employed to generate the seismic wavefield, and a hybrid method that combines modal summation with finite differences has been used to produce synthetic accelerograms up to a maximum frequency of 6 Hz for all components of motion. The ratios between the response spectra of signals derived for the 2-D local model and the corresponding spectra of signals derived for the 1-D bedrock reference model at the same site allow us to estimate the site response due to lateral heterogeneities. We interpret the results in terms of both the geological and geometrical features of the models and the characteristics of the wave propagation. The cases discussed confirm that the geometry and depth of the rock basement, along with the impedance contrast, are responsible for ground amplification phenomena such as edge effects and the generation and entrapment of local surface waves. Our analysis also confirms that peak ground acceleration is not well correlated with damage, and that spectral amplification is a substantially better estimator of possible damage.

  17. In vivo electrical bioimpedance characterization of human lung tissue during the bronchoscopy procedure. A feasibility study.

    PubMed

    Sanchez, Benjamin; Vandersteen, Gerd; Martin, Irene; Castillo, Diego; Torrego, Alfons; Riu, Pere J; Schoukens, Johan; Bragos, Ramon

    2013-07-01

    Lung biopsies form the basis for the diagnosis of lung cancer. However, in a significant number of cases bronchoscopic lung biopsies fail to provide useful information, especially in diffuse lung disease, so more aggressive procedures are required. Success could be improved using a guided electronic biopsy based on multisine electrical impedance spectroscopy (EIS), the technique evaluated in this paper. The theoretical basis of the measurement method and the instrument developed are described, and the instrument is characterized and calibrated; its performance is assessed by experiments evaluating noise and nonlinear error sources in measurements on phantoms. Additional preliminary results are included to demonstrate that it is both feasible and safe to monitor in vivo human lung tissue electrical bioimpedance (EBI) during the bronchoscopy procedure. The time required for bronchoscopy is not extended by the bioimpedance measurements, and no complications, tolerance problems or side effects occurred in any of the patients measured.

  18. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed; an interactive mode is discussed and an algorithm is proposed to choose the scaling factor automatically. The second nonparametric probability density estimator uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and shown to be consistent in mean square error. A numerical implementation technique for the discrete solution is discussed and examples are displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
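A kernel estimator and its scaling factor (bandwidth) h, the object of the paper's first part, can be sketched as follows; the rule-of-thumb bandwidth used here is a stand-in assumption, not the paper's algorithm:

```python
import math
import statistics

def gaussian_kde(sample, h):
    """Kernel density estimate f_hat(x) = (1/(n h)) sum K((x - x_i)/h)
    with a Gaussian kernel K; h is the kernel scaling factor."""
    n = len(sample)
    norm = n * h * math.sqrt(2 * math.pi)
    def f_hat(x):
        return sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
                   for xi in sample) / norm
    return f_hat

sample = [1.2, 1.9, 2.3, 2.8, 3.1, 3.9, 4.4]
# Rule-of-thumb bandwidth: one simple automatic choice, for illustration.
h = 1.06 * statistics.stdev(sample) * len(sample) ** -0.2
density = gaussian_kde(sample, h)
print(f"h = {h:.3f}, f_hat(2.5) = {density(2.5):.3f}")
```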

  19. Theoretical estimation of the aqueous pKas of thiols

    NASA Astrophysics Data System (ADS)

    Hunter, Nora E.; Seybold, Paul G.

    2014-02-01

    The ionisation state of a compound is a key parameter influencing the compound's activity as a drug, metabolite, pollutant, or other active chemical agent. Sulfhydryl compounds (thiols) tend to be considerably more acidic than their hydroxyl (alcohol) analogues. In this report, quantum chemical approaches previously used for the estimation of the aqueous pKas of alcohols are applied to the estimation of the acidities of thiols. Acidity estimates obtained from the general-purpose SPARC calculational programme (S.H. Hilal, S.W. Karickhoff, and L.A. Carreira, Quant. Struct.-Act. Relat. 14, 348 (1995)) and the ACD/Labs PhysChem Suite v12 programme package are employed as benchmarks. Quantum chemical calculations were performed using both the semiempirical RM1 method and the density functional theory B3LYP/6-31+G* method. The effectiveness of the SM5.4 and SM8 solvent models in estimating the aqueous-phase acidities was also evaluated. All of the approaches examined demonstrated strong correlations with the experimental acidity values.

  20. PREFACE: International Conference on Electrical Bioimpedance

    NASA Astrophysics Data System (ADS)

    Sadleir, Rosalind; Woo, Eung Je

    2010-04-01

    The XIVth International Conference on Electrical Bioimpedance, held in conjunction with the 11th Conference on Biomedical Applications of EIT (ICEBI & EIT 2010), took place from 4-8 April 2010 in the Reitz Union of the University of Florida, in Gainesville, USA. This was the first time since its inception in 1969 that the ICEBI was held in the United States. As in the last three conferences (Graz 2007, Gdansk 2004 and Oslo 2001) the ICEBI was combined with the Conference on Biomedical Applications of EIT - a mutually beneficial approach for those interested in the biophysics of tissue electrical properties and those developing imaging methods and measurement systems based thereon. This year's conference was particularly notable for the many papers presented on hybrid and emerging imaging techniques such as Electric Property Tomography (EPT), Magneto Acoustic Tomography using Magnetic Induction (MAT-MI) and Magnetic Resonance Electrical Impedance Tomography (MREIT); sessions on Cell Scale Impedance, Cardiac Impedance and Imaging Neural Activity. About 180 scientists from all over the world attended, including keynote speakers on topics of fundamental electromagnetic principles (Jaakko Malmivuo), Electrical Source and Impedance Imaging (Bin He), Bioimpedance applications in Nephrology, (Nathan Levin), and Lung EIT (Gerhard Wolf). The papers in this volume are peer-reviewed four-page works selected from over 150 presented in oral and poster sessions at the conference. The complete program is available from the conference website.

  1. A handheld and textile-enabled bioimpedance system for ubiquitous body composition analysis. An initial functional validation.

    PubMed

    Ferreira Gonzalez, Javier; Pau de la Cruz, Ivan; Lindecrantz, Kaj; Seoane, Fernando

    2016-11-15

    In recent years, many efforts have been made to promote a healthcare paradigm shift from the traditional reactive, hospital-centered approach towards a proactive, patient-oriented and self-managed approach that could improve service quality and help reduce costs while contributing to sustainability. Managing and caring for patients with chronic diseases accounts for over 75% of healthcare costs in developed countries. One of the most resource-demanding diseases is chronic kidney disease (CKD), which often leads to a gradual and irreparable loss of renal function, with up to 12% of the population showing signs of different stages of the disease. Peritoneal dialysis and home haemodialysis are life-saving home-based renal replacement treatments that, compared to conventional in-center hemodialysis, provide similar long-term patient survival, fewer lifestyle restrictions, such as a more flexible diet, and better flexibility in terms of treatment options and locations. Bioimpedance has been used clinically for decades in nutrition for assessing body fluid distribution. Moreover, bioimpedance methods are used to assess the overhydration state of CKD patients, allowing clinicians to estimate the amount of fluid that should be removed by ultrafiltration. In this work, the initial validation of a handheld bioimpedance system for the assessment of body fluid status that could be used to assist the patient in home-based CKD treatments is presented. The body fluid monitoring system comprises a custom-made handheld tetrapolar bioimpedance spectrometer and a textile-based electrode garment for total body fluid assessment. The system's performance was evaluated against the same measurements acquired with a commercial bioimpedance spectrometer for medical use on several volunteer subjects. The analysis of the measurement results and the comparison of the fluid estimations indicated that both devices are equivalent from a measurement performance perspective.

  2. Theoretical Formalism To Estimate the Positron Scattering Cross Section.

    PubMed

    Singh, Suvam; Dutta, Sangita; Naghma, Rahla; Antony, Bobby

    2016-07-21

    A theoretical formalism is introduced in this article to calculate the total cross sections for positron scattering. The method incorporates the positron-target interaction in the spherical complex optical potential formalism. The study of positron collisions has long been a subtle problem; recently, however, it has emerged as an interesting area due to its role in atomic and molecular structure physics, astrophysics, and medicine. With the present method, the total cross sections for the simple atoms C, N, and O and their diatomic molecules C2, N2, and O2 are obtained and compared with existing data. The total cross sections obtained in the present work have a more consistent shape and magnitude than existing theories. The characteristic dip below 10 eV is identified as due to positronium formation. The deviation of the present cross sections from measurements at energies below 10 eV is attributed to the neglect of forward-angle discrimination effects in experiments, the inefficiency of the additivity rule for molecules, the empirical treatment of positronium formation, and the neglect of annihilation reactions. In spite of these deficiencies, the present results show consistent behavior and reasonable agreement with previous data, wherever available. Moreover, this is the first computational model to report positron scattering cross sections over the energy range from 1 to 5000 eV.

  3. Theoretical informational estimation of spatial signal restoration accuracy

    NASA Astrophysics Data System (ADS)

    Shultz, Sergey V.; Bakut, Peter A.; Shumilov, Yurij P.

    1994-12-01

    In this article, the relationship between the accuracy of random spatial-signal restoration and the quantity of information about the signal contained in the random observed function is considered, based on notions from information theory and statistical decision theory. It is shown that, in the special case of a statistically homogeneous, strongly correlated signal in the observation area, the lower bound obtained for the dispersion of the signal-restoration error is analogous to the lower bound defined by the Cramer-Rao inequality, which is widely used in statistical estimation theory for analysing the potential accuracy of discrete parameters. The results are applied to the accuracy analysis of satellite information systems operating in the optical wavelength region. The image-restoration accuracy is considered as a function of the influence of small particles ejected from the satellite and of the quantum noise resulting from the interaction of the irradiance with the photodetector substance.

  4. Implantable bioimpedance monitor using ZigBee.

    PubMed

    Bogónez-Franco, P; Bragós, R; Bayés-Genis, A; Rosell-Ferrer, J

    2009-01-01

    In this paper, a novel implantable bioimpedance monitor using the free ZigBee protocol for transmission of the measured data is described. The application field is tissue and organ monitoring through electrical impedance spectroscopy in the 100 Hz - 200 kHz range; the specific application is the study of the viability and evolution of engineered tissue in cardiac regeneration. In addition to the telemetry feature, the measured data are stored in a memory for backup purposes and can be downloaded at any time after an RF link break. In the debugging prototype, the system autonomy exceeds 1 month when a 14-frequency impedance spectrum is acquired every 5 minutes. In the current implementation, the effective range of the RF link is limited, so a range extender placed near the animal is needed. Current work aims at improving this range.

  5. Comparing geophysical measurements to theoretical estimates for soil mixtures at low pressures

    SciTech Connect

    Wildenschild, D; Berge, P A; Berryman, K G; Bonner, B P; Roberts, J J

    1999-01-15

    The authors obtained good estimates of measured velocities of sand-peat samples at low pressures by using a theoretical method, the self-consistent theory of Berryman (1980), using sand and porous peat to represent the microstructure of the mixture. They were unable to obtain useful estimates with several other theoretical approaches, because the properties of the quartz, air, and peat components of the samples vary over several orders of magnitude. Methods that are useful for consolidated rock cannot be applied directly to unconsolidated materials. Instead, careful consideration of microstructure is necessary to adapt the methods successfully. Future work includes comparison of the measured velocity values to additional theoretical estimates, investigation of Vp/Vs ratios and wave amplitudes, as well as modeling of dry and saturated sand-clay mixtures (e.g., Bonner et al., 1997, 1998). The results suggest that field data can be interpreted by comparing laboratory measurements of soil velocities to theoretical estimates of velocities, in order to establish a systematic method for predicting velocities for a full range of sand-organic material mixtures at various pressures. Once the theoretical relationship is obtained, it can be used to estimate the soil composition at various depths from field measurements of seismic velocities. Further refinement of the method for relating velocities to soil characteristics is useful for developing inversion algorithms.
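
    As a reminder of the quantities being compared, seismic velocities follow from the effective elastic moduli that theories such as the self-consistent scheme predict. A minimal sketch (with approximate handbook values for quartz, not the paper's sand-peat moduli):

```python
import math

def velocities(k_bulk: float, g_shear: float, rho: float):
    """P- and S-wave velocities (m/s) from bulk modulus K and shear modulus G
    (both in Pa) and density rho (kg/m^3): Vp = sqrt((K + 4G/3)/rho), Vs = sqrt(G/rho)."""
    vp = math.sqrt((k_bulk + 4.0 * g_shear / 3.0) / rho)
    vs = math.sqrt(g_shear / rho)
    return vp, vs

# Approximate handbook values for quartz: K = 37 GPa, G = 44 GPa, rho = 2650 kg/m^3
vp_qtz, vs_qtz = velocities(37e9, 44e9, 2650.0)   # roughly 6.0 and 4.1 km/s
```

    In an effective-medium calculation, K and G would be the composite moduli of the sand-organic mixture rather than those of a single mineral.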

  6. The Theoretical Estimation of the Bioluminescent Efficiency of the Firefly via a Nonadiabatic Molecular Dynamics Simulation.

    PubMed

    Yue, Ling; Lan, Zhenggang; Liu, Ya-Jun

    2015-02-05

    The firefly is famous for its high bioluminescent efficiency, which has attracted both scientific and public attention. The chemical origin of firefly bioluminescence is the thermolysis of the firefly dioxetanone anion (FDO(-)). Although considerable theoretical research has been conducted, and several mechanisms were proposed to elucidate the high efficiency of the chemi- and bioluminescence of FDO(-), there is a lack of direct experimental and theoretical evidence. For the first time, we performed a nonadiabatic molecular dynamics simulation on the chemiluminescent decomposition of FDO(-) under the framework of the trajectory surface hopping (TSH) method and theoretically estimated the chemiluminescent quantum yield. The TSH simulation reproduced the gradually reversible charge-transfer initiated luminescence mechanism proposed in our previous study. More importantly, the current study, for the first time, predicted the bioluminescence efficiency of the firefly from a theoretical viewpoint, and the theoretical prediction efficiency is in good agreement with experimental measurements.

  7. Modeling the influence of body position in bioimpedance measurements.

    PubMed

    Medrano, G; Leonhardt, S; Zhang, P

    2007-01-01

    Bioimpedance Spectroscopy (BIS) enables the determination of human body composition (e.g. fat content, water content), from which conclusions about a person's state of health can be drawn. The technology is inexpensive and easy to implement, making it suitable for reliable use at home. Nevertheless, external factors such as body position influence the measurements, limiting their accuracy and use. Modeling these external factors and their influence on the body could improve the accuracy of bioimpedance spectroscopy and extend it to continuous monitoring. In this paper, the results of modeling the effect of body position on a localized bioimpedance measurement (thigh) are shown and discussed, together with a comparison against measurements on 5 subjects lying down for 40 minutes.

  8. Limitations of the spike-triggered averaging for estimating motor unit twitch force: a theoretical analysis.

    PubMed

    Negro, Francesco; Yavuz, Utku Ş; Farina, Dario

    2014-01-01

    Contractile properties of human motor units provide information on the force capacity and fatigability of muscles. The spike-triggered averaging technique (STA) is a conventional method used to estimate the twitch waveform of single motor units in vivo by averaging the joint force signal. Several limitations of this technique have previously been discussed empirically, using simulated and experimental data. In this study, we provide a theoretical analysis of this technique in the frequency domain and describe its intrinsic limitations. By analyzing the analytical expression of STA, first we show that a certain degree of correlation between the motor unit activities prevents an accurate estimation of the twitch force, even from relatively long recordings. Second, we show that the quality of the twitch estimates by STA is highly related to the relative variability of the inter-spike intervals of motor unit action potentials. Interestingly, if this variability is extremely high, correct estimates can be obtained even for high discharge rates. However, for physiological inter-spike interval variability and discharge rate, the technique performs with relatively low estimation accuracy and high estimation variance. Finally, we show that the selection of the triggers that are most distant from the previous and next spikes, which is often suggested, is not an effective way to improve STA estimates and in some cases can even be detrimental. These results show the intrinsic limitations of the STA technique and provide a theoretical framework for the design of new methods for the measurement of motor unit force twitch.
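
    The bias mechanism described above is easy to reproduce numerically. The sketch below (the twitch shape, firing statistics, and seed are illustrative assumptions, not the authors' simulation) builds a force trace from overlapping unit-peak twitches and shows that the spike-triggered average overestimates the peak whenever twitches overlap:

```python
import math
import random

def twitch(t, peak=1.0, t_peak=0.05):
    """Idealised motor-unit twitch shape (critically damped impulse response)."""
    return peak * (t / t_peak) * math.exp(1.0 - t / t_peak) if t >= 0 else 0.0

def spike_triggered_average(force, spikes, win_samples):
    """Average the force segments that follow each spike."""
    segs = [force[s:s + win_samples] for s in spikes if s + win_samples <= len(force)]
    return [sum(col) / len(segs) for col in zip(*segs)]

dt = 0.001
n = round(20.0 / dt)                    # 20 s of simulated force at 1 kHz
rng = random.Random(1)
spikes, t = [], 0.0
while t < 20.0 - 0.3:                   # ~10 Hz discharge with high ISI variability
    spikes.append(round(t / dt))
    t += max(0.02, rng.gauss(0.1, 0.03))
force = [0.0] * n
for s in spikes:                        # twitches last ~0.3 s, so they overlap at 10 Hz
    for k in range(round(0.3 / dt)):
        if s + k < n:
            force[s + k] += twitch(k * dt)
sta = spike_triggered_average(force, spikes, round(0.3 / dt))
peak_est = max(sta)                     # biased above the true peak of 1.0
```

    Because every averaged segment contains force from neighbouring discharges as well as the triggering one, the STA peak estimate exceeds the true twitch peak, which is the overlap bias the paper analyses.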

  9. Detection of questionable occlusal carious lesions using an electrical bioimpedance method with fractional electrical model

    NASA Astrophysics Data System (ADS)

    Morais, A. P.; Pino, A. V.; Souza, M. N.

    2016-08-01

    This in vitro study evaluated the diagnostic performance of an alternative electric bioimpedance spectroscopy technique (BIS-STEP) to detect questionable occlusal carious lesions. Six specialists carried out visual (V), radiographic (R), and combined (VR) examinations of 57 teeth that were sound or had non-cavitated occlusal carious lesions, classifying the occlusal surfaces as sound (H), enamel caries (EC), or dentinal caries (DC). Measurements were based on the current response to a step voltage excitation (BIS-STEP). A fractional electrical model was used to predict the current response in the time domain and to estimate the model parameters: Rs and Rp (resistive parameters), and C and α (fractional parameters). Histological analysis showed a caries prevalence of 33.3%, with 15.8% being hidden caries. The combined examination gave the best traditional diagnostic results, with specificity = 59.0%, sensitivity = 70.9%, and accuracy = 60.8%. There were statistically significant differences in bioimpedance parameters between the H and EC groups (p = 0.016) and between the H and DC groups (Rs, p = 0.006; Rp, p = 0.022; and α, p = 0.041). Using a suitable threshold for Rs, we obtained specificity = 60.7%, sensitivity = 77.9%, accuracy = 73.2%, and 100% detection of deep lesions. It can be concluded that the BIS-STEP method could be an important tool to improve the detection and management of occlusal non-cavitated primary caries and pigmented sites.
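
    The idea of extracting parameters from a step response can be sketched for the ideal-capacitor special case (α = 1) of the fractional model, with hypothetical component values; Rs and Rp then follow directly from the initial and settled current:

```python
import math

def step_current(t, v, rs, rp, c):
    """Current response of Rs in series with (Rp || C) to a step voltage v.
    This is the alpha = 1 (ideal capacitor) special case of the fractional model."""
    tau = c * rs * rp / (rs + rp)       # time constant seen by the capacitor
    i_inf = v / (rs + rp)               # capacitor fully charged (open)
    i_0 = v / rs                        # capacitor initially uncharged (short)
    return i_inf + (i_0 - i_inf) * math.exp(-t / tau)

# "Measure" a synthetic response, then recover the resistive parameters from
# its initial and final values (hypothetical component values).
v, rs_true, rp_true, c_true = 1.0, 5e3, 20e3, 10e-9
i0 = step_current(0.0, v, rs_true, rp_true, c_true)
iinf = step_current(1.0, v, rs_true, rp_true, c_true)   # t >> tau, fully settled
rs_est = v / i0
rp_est = v / iinf - rs_est
```

    With a fractional element (α < 1) the decay is no longer a single exponential, which is why the paper fits the full time-domain model rather than reading off two current values.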

  10. Theoretical and experimental investigations of sensor location for optimal aeroelastic system state estimation

    NASA Technical Reports Server (NTRS)

    Liu, G.

    1985-01-01

    One of the major concerns in the design of an active control system is obtaining the information needed for effective feedback. This involves the combination of sensing and estimation. A sensor location index is defined as the weighted sum of the mean square estimation errors in which the sensor locations can be regarded as estimator design parameters. The design goal is to choose these locations to minimize the sensor location index. The choice of the number of sensors is a tradeoff between the estimation quality based upon the same performance index and the total costs of installing and maintaining extra sensors. An experimental study for choosing the sensor location was conducted on an aeroelastic system. The system modeling which includes the unsteady aerodynamics model developed by Stephen Rock was improved. Experimental results verify the trend of the theoretical predictions of the sensor location index for different sensor locations at various wind speeds.

  11. Theoretical estimates of spherical and chromatic aberration in photoemission electron microscopy.

    PubMed

    Fitzgerald, J P S; Word, R C; Könenkamp, R

    2016-01-01

    We present theoretical estimates of the mean coefficients of spherical and chromatic aberration for low energy photoemission electron microscopy (PEEM). Using simple analytic models, we find that the aberration coefficients depend primarily on the difference between the photon energy and the photoemission threshold, as expected. However, the shape of the photoelectron spectral distribution impacts the coefficients by up to 30%. These estimates should allow more precise correction of aberration in PEEM in experimental situations where the aberration coefficients and precise electron energy distribution cannot be readily measured.

  12. A theoretical estimation for the optimal network robustness measure R against malicious node attacks

    NASA Astrophysics Data System (ADS)

    Ma, Liangliang; Liu, Jing; Duan, Boping; Zhou, Mingxing

    2015-07-01

    In a recent work (Schneider C. M. et al., Proc. Natl. Acad. Sci. U.S.A., 108 (2011) 3838), Schneider et al. introduced an effective measure R to evaluate network robustness against malicious attacks on nodes. Taking R as the objective function, they used a heuristic algorithm to optimize network robustness. In this paper, a theoretical analysis is conducted to estimate the value of R for different types of networks, including regular networks, WS networks, ER networks, and BA networks. The experimental results show that the theoretical value of R is approximately equal to that of optimized networks. Furthermore, the theoretical analysis also shows that regular networks are more robust than the other network types. To validate this result, a heuristic method is proposed to optimize the network structure, in which the degree distribution can change while the number of nodes and edges remains invariant. The optimization results show that the degree of most nodes in the optimal networks is close to the average degree, and the optimal network topology is close to a regular network, which confirms the theoretical analysis.
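
    Schneider et al.'s measure can be computed directly from its definition: repeatedly remove the currently highest-degree node and average the fraction of nodes remaining in the largest connected component. A minimal pure-Python sketch on a toy 6-node ring (a regular graph, in line with the paper's conclusion):

```python
from collections import deque

def largest_cc_size(adj, removed):
    """Size of the largest connected component, ignoring removed nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        q, comp = deque([start]), 0
        seen.add(start)
        while q:
            u = q.popleft()
            comp += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        best = max(best, comp)
    return best

def robustness_R(adj):
    """Schneider et al.'s R: mean fraction of nodes in the largest component
    while removing, one at a time, the currently highest-degree node."""
    adj = {u: set(vs) for u, vs in adj.items()}
    n = len(adj)
    total, removed = 0.0, set()
    for _ in range(n):
        target = max((u for u in adj if u not in removed),
                     key=lambda u: sum(1 for w in adj[u] if w not in removed))
        removed.add(target)
        total += largest_cc_size(adj, removed) / n
    return total / n

# a 6-cycle: every node has degree 2, the regular-graph case
ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
r_ring = robustness_R(ring)
```

    Ties in the degree-based attack are broken by node order here; the definition itself does not fix a tie-breaking rule.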

  13. Theoretical estimates of maximum fields in superconducting resonant radio frequency cavities: stability theory, disorder, and laminates

    NASA Astrophysics Data System (ADS)

    Liarte, Danilo B.; Posen, Sam; Transtrum, Mark K.; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P.

    2017-03-01

    Theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces are of key relevance to current and future accelerating cavities, especially those made of new higher-Tc materials such as Nb3Sn, NbN, and MgB2. Indeed, beyond the so-called superheating field H_sh, flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We present intuitive arguments and simple estimates for H_sh, and combine them with our previous rigorous calculations, which we summarize. We briefly discuss experimental measurements of the superheating field, comparing them to our estimates. We explore the effects of materials anisotropy and the danger of disorder in nucleating vortex entry. Will we need to control surface orientation in the layered compound MgB2? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. Flux entering a laminate can lead to so-called pancake vortices; we consider the physics of the dislocation motion and potential re-annihilation or stabilization of these vortices after their entry.

  14. Effect of osmotic pressure to bioimpedance indexes of erythrocyte suspensions

    NASA Astrophysics Data System (ADS)

    Melnikov, A. A.; Nikolaev, D. V.; Malahov, M. V.; Smirnov, A. V.

    2012-12-01

    In this paper we studied the effects of osmotic modification of red blood cells on the bioimpedance parameters of erythrocyte suspensions. The Cole parameters of concentrated erythrocyte suspensions, namely the extracellular (Re) and intracellular (Ri) fluid resistance, the Alpha parameter, the characteristic frequency (Fchar), and the cell membrane capacitance (Cm), were measured with a bioimpedance analyser in the frequency range 5 - 500 kHz. Erythrocytes were incubated in hypo-, hyper- and isoosmotic solutions to achieve changes in cell volume. It was found that Re and Alpha increased in suspensions with low osmolarity and decreased in hypertonic suspensions. Ri, Fchar and Cm were higher in the hyperosmotic and lower in the hypoosmotic suspensions. Correlations of all BIS parameters with MCV were obtained, but multiple regression analysis showed that only the Alpha parameter was independently related to MCV (β=0.77, p=0.01). Thus the Alpha parameter may be related to the mean corpuscular volume of the cells.
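
    For reference, the Cole parameters listed above enter the standard tissue-equivalent model: Re in parallel with (Ri in series with a constant-phase membrane element). A sketch with hypothetical values shows the expected limits: current is confined to the extracellular path at low frequency and flows through both compartments at high frequency.

```python
import math

def cole_impedance(f, re, ri, cm, alpha):
    """Cole model: extracellular Re in parallel with (intracellular Ri in series
    with a constant-phase membrane element of capacitance Cm and exponent alpha)."""
    w = 2 * math.pi * f
    z_cpe = 1.0 / (cm * (1j * w) ** alpha)   # constant-phase element
    z_intra = ri + z_cpe
    return re * z_intra / (re + z_intra)

# hypothetical suspension parameters
re, ri, cm, alpha = 700.0, 900.0, 3e-9, 0.9
z_low = cole_impedance(1.0, re, ri, cm, alpha)    # -> ~Re: membrane blocks current
z_high = cole_impedance(1e8, re, ri, cm, alpha)   # -> ~Re*Ri/(Re+Ri): membrane shorted
```

    The characteristic frequency Fchar is where the reactance of this locus peaks, and alpha sets how strongly the semicircular impedance arc is depressed.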

  15. PART II: Comparison of Theoretical and Experimental Estimations of Site Effects

    NASA Astrophysics Data System (ADS)

    Triantafyllidis, P.; Hatzidimitriou, P. M.; Suhadolc, P.; Theodulidis, N.; Anastasiadis, A.

    To check the reliability and the quality of the theoretically estimated ground responses obtained from the 2-D simulation by the application of the hybrid method in PART-I, we compare some of them with those obtained at the same sites from observed data using the Standard Spectral Ratio (SSR). The comparison validates our synthetic modeling and shows that in cases of complex geometries, the use of at least 2-D numerical simulations is required in order to reliably evaluate site effects and thus facilitate the microzonation of the city of Thessaloniki.

  16. Effect of Influenza-Induced Fever on Human Bioimpedance Values

    PubMed Central

    Marini, Elisabetta; Buffa, Roberto; Contreras, Monica; Magris, Magda; Hidalgo, Glida; Sanchez, Wilmer; Ortiz, Vanessa; Urbaez, Maryluz; Cabras, Stefano; Blaser, Martin J.; Dominguez-Bello, Maria G.

    2015-01-01

    Background and Aims Bioelectrical impedance analysis (BIA) is a widely used technique to assess body composition and nutritional status. While bioelectrical values are affected by diverse variables, there has been little research on validation of BIA in acute illness, especially to understand prognostic significance. Here we report the use of BIA in acute febrile states induced by influenza. Methods Bioimpedance studies were conducted during an H1N1 influenza A outbreak in Venezuelan Amerindian villages from the Amazonas. Measurements were performed on 52 subjects between 1 and 40 years of age, and 7 children were re-examined after starting Oseltamivir treatment. Bioelectrical Impedance Vector Analysis (BIVA) and permutation tests were applied. Results For the entire sample, febrile individuals showed a tendency toward greater reactance (p=0.058) and phase angle (p=0.037) than afebrile individuals, while resistance and impedance were similar in the two groups. Individuals with repeated measurements showed significant differences in bioimpedance values associated with fever, including increased reactance (p<0.001) and phase angle (p=0.007), and decreased resistance (p=0.007) and impedance (p<0.001). Conclusions There are bioelectrical variations induced by influenza that can be related to dehydration, with lower extracellular to intracellular water ratio in febrile individuals, or a direct thermal effect. Caution is recommended when interpreting bioimpedance results in febrile states. PMID:25915945
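
    The reported quantities are related simply: phase angle is the arctangent of reactance over resistance, and impedance is their vector magnitude. A sketch with hypothetical 50 kHz readings mirroring the reported direction of change (higher reactance and phase angle, slightly lower resistance, with fever):

```python
import math

def phase_angle_deg(resistance: float, reactance: float) -> float:
    """BIA phase angle in degrees from resistance R and reactance Xc."""
    return math.degrees(math.atan2(reactance, resistance))

def impedance_magnitude(resistance: float, reactance: float) -> float:
    """|Z| = sqrt(R^2 + Xc^2)."""
    return math.hypot(resistance, reactance)

# hypothetical 50 kHz readings (ohms), not values from the study
pa_afebrile = phase_angle_deg(620.0, 60.0)
pa_febrile = phase_angle_deg(600.0, 68.0)   # higher Xc, lower R -> larger phase angle
```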

  17. An efficient analysis of nanomaterial cytotoxicity based on bioimpedance

    NASA Astrophysics Data System (ADS)

    Kandasamy, Karthikeyan; Choi, Cheol Soo; Kim, Sanghyo

    2010-09-01

    In the emerging nanotechnology field, there is an urgent need for the development of a significant and sensitive method that can be used to analyse and compare the cytotoxicities of nanomaterials such as carbon nanotubes (CNTs) and gold nanoparticles (AuNPs), since such materials can be applied as contrast agents or drug delivery carriers. The bioimpedance system possesses great potential in many medical research fields including nanotechnology. Electric cell-substrate impedance sensing (ECIS) is a particular bioimpedance system that offers a real-time, non-invasive, and quantitative measurement method for the cytotoxicity of various materials. The present work compared the cytotoxicity of AuNPs to that of purchased single-walled carbon nanotubes (SWCNTs). The size-controlled and monodispersed AuNPs were synthesized under autoclaved conditions and reduced by ascorbic acid (AA) whereas the purchased SWCNTs were used without any surface modifications. Bioimpedance results were validated by conventional WST-1 and trypan blue assays, and transmission electron microscopy (TEM) and field emission scanning electron microscopy (FE-SEM) were performed to examine nanomaterials inside the VERO cells. This research evaluates the ability of the ECIS system compared to those of conventional methods in analyzing the cytotoxicity of AuNPs and SWCNTs with higher sensitivity under real-time conditions.

  18. [A study of a coordinate-transform iterative fitting method to extract bio-impedance model parameters].

    PubMed

    Zhou, Liming; Yang, Yuxing; Yuan, Shiying

    2006-02-01

    A new algorithm, a coordinate-transform iterative optimization method based on a least-squares curve-fitting model, is presented. The algorithm is used to extract bio-impedance model parameters and is superior to comparable methods: it converges faster and achieves higher precision. The model parameters Ri, Re, Cm, and alpha are extracted rapidly and accurately. With the aim of lowering power consumption, decreasing cost, and improving the price-to-performance ratio, a practical bio-impedance measurement system with two CPUs has been built. Preliminary results indicate that the intracellular resistance Ri increases markedly with working load during sitting, which reflects the ischemic change of the lower limbs.
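
    The paper's exact coordinate-transform iteration is not reproduced here, but the underlying task, recovering Cole parameters from the measured impedance locus, can be illustrated with a common alternative: the Kasa least-squares circle fit, which linearizes the circle equation and solves a 3x3 system. The axis crossings of the fitted circle give the low- and high-frequency resistances:

```python
import math

def fit_circle(points):
    """Kasa least-squares circle fit: linearise (x-a)^2 + (y-b)^2 = r^2 into
    x^2 + y^2 = 2ax + 2by + c and solve the 3x3 normal equations."""
    m = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y in points:
        row = (2 * x, 2 * y, 1.0)
        d = x * x + y * y
        for i in range(3):
            rhs[i] += row[i] * d
            for j in range(3):
                m[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r_: abs(m[r_][col]))
        m[col], m[piv] = m[piv], m[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r_ in range(col + 1, 3):
            f = m[r_][col] / m[col][col]
            for j in range(col, 3):
                m[r_][j] -= f * m[col][j]
            rhs[r_] -= f * rhs[col]
    z = [0.0] * 3
    for r_ in (2, 1, 0):
        z[r_] = (rhs[r_] - sum(m[r_][j] * z[j] for j in range(r_ + 1, 3))) / m[r_][r_]
    a, b, c = z
    return a, b, math.sqrt(c + a * a + b * b)

# synthetic depressed Cole arc: centre (500, -100) ohms, radius 300 ohms
arc = [(500 + 300 * math.cos(t), -100 + 300 * math.sin(t))
       for t in [i * 0.1 for i in range(4, 28)]]
a, b, r = fit_circle(arc)
r0 = a + math.sqrt(r * r - b * b)     # low-frequency crossing of the real axis
rinf = a - math.sqrt(r * r - b * b)   # high-frequency crossing
```

    From r0 and rinf the resistive Cole parameters follow (for the parallel model, Re = r0 and Ri = r0*rinf/(r0 - rinf)); the depression of the centre below the real axis encodes alpha.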

  19. Probability density estimation using isocontours and isosurfaces: applications to information-theoretic image registration.

    PubMed

    Rajwade, Ajit; Banerjee, Arunava; Rangarajan, Anand

    2009-03-01

    We present a new, geometric approach for determining the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels, and assume a piecewise-continuous representation. The probability density can then be regarded as being proportional to the area between two nearby isocontours of the image surface. Our paper extends this idea to joint densities of image pairs. We demonstrate the application of our method to affine registration between two or more images using information theoretic measures such as mutual information. We show cases where our method outperforms existing methods such as simple histograms, histograms with partial volume interpolation, Parzen windows, etc. under fine intensity quantization for affine image registration under significant image noise. Furthermore, we demonstrate results on simultaneous registration of multiple images, as well as for pairs of volume datasets, and show some theoretical properties of our density estimator. Our approach requires the selection of only an image interpolant. The method neither requires any kind of kernel functions (as in Parzen windows) which are unrelated to the structure of the image in itself, nor does it rely on any form of sampling for density estimation.

  20. Probability Density Estimation Using Isocontours and Isosurfaces: Application to Information-Theoretic Image Registration

    PubMed Central

    Rajwade, Ajit; Banerjee, Arunava; Rangarajan, Anand

    2010-01-01

    We present a new geometric approach for determining the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels and assume a piecewise-continuous representation. The probability density can then be regarded as being proportional to the area between two nearby isocontours of the image surface. Our paper extends this idea to joint densities of image pairs. We demonstrate the application of our method to affine registration between two or more images using information-theoretic measures such as mutual information. We show cases where our method outperforms existing methods such as simple histograms, histograms with partial volume interpolation, Parzen windows, etc., under fine intensity quantization for affine image registration under significant image noise. Furthermore, we demonstrate results on simultaneous registration of multiple images, as well as for pairs of volume data sets, and show some theoretical properties of our density estimator. Our approach requires the selection of only an image interpolant. The method neither requires any kind of kernel functions (as in Parzen windows), which are unrelated to the structure of the image in itself, nor does it rely on any form of sampling for density estimation. PMID:19147876

  1. Graph theoretic framework based cooperative control and estimation of multiple UAVs for target tracking

    NASA Astrophysics Data System (ADS)

    Ahmed, Mousumi

    Designing control techniques for nonlinear dynamic systems is a significant challenge. Approaches to designing a nonlinear controller are studied, and an extensive study of backstepping-based techniques is performed in this research with the purpose of tracking a moving target autonomously. Our main motivation is to explore the controller for cooperative and coordinated unmanned vehicles in a target tracking application. To start with, a general theoretical framework for target tracking is studied and a controller in a three-dimensional environment for a single UAV is designed. This research is primarily focused on finding a generalized method which can be applied to track almost any reference trajectory. The backstepping technique is employed to derive the controller for a simplified UAV kinematic model. This controller computes three autopilot commands, i.e. velocity, ground heading (or course angle), and flight path angle, for the unmanned vehicle. Numerical implementation is performed in MATLAB under the assumption of perfect and full state information of the target, to investigate the accuracy of the proposed controller. This controller is then frozen for the multi-vehicle problem. Distributed or decentralized cooperative control is discussed in the context of multi-agent systems. A consensus-based cooperative control is studied; such consensus-based control problems can be viewed through the concepts of algebraic graph theory. The communication structure between the UAVs is represented by a dynamic graph, where UAVs are represented by nodes and communication links by edges. The previously designed controller is augmented to account for the group, to obtain consensus based on their communication. A theoretical development of the controller for the cooperative group of UAVs is presented, and simulation results for different communication topologies are shown. 
This research also investigates the cases where the communication
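
    The consensus idea described above can be sketched independently of the UAV dynamics: each agent repeatedly moves its state toward its neighbours' states (x <- x - eps*L*x, with L the graph Laplacian), and all agents converge to the average of the initial values. The topology and values below (a 4-node ring) are illustrative assumptions:

```python
def consensus_step(x, edges, eps=0.2):
    """One synchronous consensus update x <- x - eps * L * x on an undirected graph,
    written edge-by-edge so the state sum is conserved exactly."""
    nxt = list(x)
    for i, j in edges:
        nxt[i] += eps * (x[j] - x[i])
        nxt[j] += eps * (x[i] - x[j])
    return nxt

# ring of 4 agents agreeing on a shared estimate (e.g. a target coordinate)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
x = [10.0, 4.0, 7.0, 3.0]          # initial estimates; average is 6.0
for _ in range(100):
    x = consensus_step(x, edges)
```

    Convergence requires the graph to be connected and the step size to satisfy eps < 2 / lambda_max(L); here lambda_max = 4 for the 4-ring, so eps = 0.2 is safe.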

  2. Fecundity estimation by oocyte packing density formulae in determinate and indeterminate spawners: Theoretical considerations and applications

    NASA Astrophysics Data System (ADS)

    Kurita, Yutaka; Kjesbu, Olav S.

    2009-02-01

    This paper explores why the 'Auto-diametric method', currently used in many laboratories to quickly estimate fish fecundity, works well on marine species with a determinate reproductive style but much less so on species with an indeterminate reproductive style. Algorithms describing links between potentially important explanatory variables used to estimate fecundity were first established, followed by practical observations to validate the method in two extreme situations: 1) straightforward fecundity estimation in a determinate, single-batch spawner, Atlantic herring (AH) Clupea harengus, and 2) challenging fecundity estimation in an indeterminate, multiple-batch spawner, Japanese flounder (JF) Paralichthys olivaceus. The Auto-diametric method relies on successfully predicting the number of vitellogenic oocytes (VTO) per gram of ovary (oocyte packing density; OPD) from the mean VTO diameter. Theoretically, OPD can be reproduced from four variables: OD_V (the volume-based mean VTO diameter, which deviates from the arithmetic mean VTO diameter), VF_vto (the volume fraction of VTO in the ovary), ρ_o (the specific gravity of the ovary), and k (the VTO shape, i.e. the ratio of the long and short oocyte axes). VF_vto, ρ_o, and k were examined in relation to growth in OD_V. The dynamic range throughout maturation was clearly highest for VF_vto. As a result, OPD was influenced mainly by OD_V and secondly by VF_vto. Log(OPD) for AH decreased as log(OD_V) increased, while log(OPD) for JF first increased during early vitellogenesis, then decreased during late vitellogenesis and spawning as log(OD_V) increased. These linear regressions thus behaved statistically differently between species, and the associated residuals fluctuated more for JF than for AH. We conclude that the OPD-OD_V relationship may be better expressed by several curves that cover different parts of the maturation cycle rather than by one curve that covers all of them. This seems to be particularly
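
    One plausible form of the packing-density relation (an illustrative assumption on our part; the paper's algorithms additionally involve the shape factor k) treats OD_V as the volume-based mean diameter, so that the mean oocyte volume is pi/6 * OD_V^3 and the count per gram follows from the volume fraction and ovary density:

```python
import math

def oocyte_packing_density(od_v_um: float, vf_vto: float, rho_ovary: float) -> float:
    """Oocytes per gram of ovary, assuming mean oocyte volume = pi/6 * OD_V^3.
    Illustrative form only; a shape correction k would enter a fuller version."""
    vol_cm3 = math.pi / 6.0 * (od_v_um * 1e-4) ** 3   # convert um to cm
    return vf_vto / (rho_ovary * vol_cm3)

# hypothetical late-vitellogenesis values: OD_V = 1000 um, VF_vto = 0.8, rho = 1.05 g/cm^3
opd = oocyte_packing_density(od_v_um=1000.0, vf_vto=0.8, rho_ovary=1.05)
```

    This form makes the abstract's main point concrete: for fixed VF_vto, OPD falls as the cube of OD_V, so OD_V dominates, with VF_vto as the secondary influence.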

  3. Error correction algorithm for high accuracy bio-impedance measurement in wearable healthcare applications.

    PubMed

    Kubendran, Rajkumar; Lee, Seulki; Mitra, Srinjoy; Yazicioglu, Refet Firat

    2014-04-01

    Implantable and ambulatory measurement of physiological signals such as Bio-impedance using miniature biomedical devices needs careful tradeoff between limited power budget, measurement accuracy and complexity of implementation. This paper addresses this tradeoff through an extensive analysis of different stimulation and demodulation techniques for accurate Bio-impedance measurement. Three cases are considered for rigorous analysis of a generic impedance model, with multiple poles, which is stimulated using a square/sinusoidal current and demodulated using square/sinusoidal clock. For each case, the error in determining pole parameters (resistance and capacitance) is derived and compared. An error correction algorithm is proposed for square wave demodulation which reduces the peak estimation error from 9.3% to 1.3% for a simple tissue model. Simulation results in Matlab using ideal RC values show an average accuracy of for single pole and for two pole RC networks. Measurements using ideal components for a single pole model gives an overall and readings from saline phantom solution (primarily resistive) gives an . A Figure of Merit is derived based on ability to accurately resolve multiple poles in unknown impedance with minimal measurement points per decade, for given frequency range and supply current budget. This analysis is used to arrive at an optimal tradeoff between accuracy and power. Results indicate that the algorithm is generic and can be used for any application that involves resolving poles of an unknown impedance. It can be implemented as a post-processing technique for error correction or even incorporated into wearable signal monitoring ICs.
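
    The baseline against which such error-correction schemes are judged is plain synchronous (sine) demodulation, which recovers the amplitude and phase of a tone from its in-phase and quadrature products. A minimal numeric sketch (not the paper's algorithm; signal values are hypothetical):

```python
import math

def demodulate(signal_fn, freq, n=4000, cycles=8):
    """Numeric in-phase/quadrature synchronous (sine) demodulation over an
    integer number of cycles; returns (amplitude, phase) of the component at freq."""
    t_end = cycles / freq
    dt = t_end / n
    i_acc = q_acc = 0.0
    for k in range(n):                       # midpoint-rule integration
        t = (k + 0.5) * dt
        v = signal_fn(t)
        i_acc += v * math.sin(2 * math.pi * freq * t) * dt
        q_acc += v * math.cos(2 * math.pi * freq * t) * dt
    i_comp, q_comp = 2 * i_acc / t_end, 2 * q_acc / t_end
    return math.hypot(i_comp, q_comp), math.atan2(q_comp, i_comp)

amp_true, phase_true = 0.75, 0.4
sig = lambda t: amp_true * math.sin(2 * math.pi * 1000.0 * t + phase_true)
amp_est, phase_est = demodulate(sig, 1000.0)
```

    Replacing the sine reference with a square wave makes the demodulator sensitive to odd harmonics of the response, which is the error source the paper's correction algorithm addresses.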

  4. The bioimpedance analysis of a parenchyma of a liver in the conditions of its extensive resection in experiment

    NASA Astrophysics Data System (ADS)

    Agibalov, D. Y.; Panchenkov, D. N.; Chertyuk, V. B.; Leonov, S. D.; Astakhov, D. A.

    2017-01-01

    Liver failure, which results from a mismatch between the functional capacity of the liver and the demands of the organism, is the main reason for unsatisfactory outcomes of extensive liver resection. However, no single effective criterion for grading the degree of liver failure has yet been developed. One method of obtaining data on the morpho-functional state of internal organs is bioimpedance analysis (BIA), based on the assessment of the impedance (total electrical resistance) of biological tissue. Impedance measurements are used in medicine and biology to characterize the physical properties of living tissue and to study changes related to its functional state and structural features. Under experimental conditions, we performed an extensive liver resection on 27 white laboratory Wistar rats. Bioimpedance data obtained intraoperatively and postoperatively were compared with the main existing methods for assessing the functional state of the liver. Based on this work, we can state that invasive bioimpedance analysis of the liver allows its morphological features and functional activity to be assessed before an extensive resection is performed. The data obtained provide experimental justification for using impedance measurement in the comprehensive assessment of the functional reserves of the liver. Preliminary data from clinical approbation, at the stage of introducing the technique, indicate that bioimpedance measurement is reasonably informative. Further accumulation of clinical data is required for subsequent analysis of the efficiency of invasive bioimpedance analysis of the liver; even at this stage, however, the method shows promise for further use in clinical hepatic surgery.

  5. A theoretical estimate for nucleotide sugar demand towards Chinese Hamster Ovary cellular glycosylation

    PubMed Central

    del Val, Ioscani Jimenez; Polizzi, Karen M.; Kontoravdi, Cleo

    2016-01-01

    Glycosylation greatly influences the safety and efficacy of many of the highest-selling recombinant therapeutic proteins (rTPs). In order to define optimal cell culture feeding strategies that control rTP glycosylation, it is necessary to know how nucleotide sugars (NSs) are consumed towards host cell and rTP glycosylation. Here, we present a theoretical framework that integrates the reported glycoproteome of CHO cells, the number of N-linked and O-GalNAc glycosylation sites on individual host cell proteins (HCPs), and the carbohydrate content of CHO glycosphingolipids to estimate the demand of NSs towards CHO cell glycosylation. We have identified the most abundant N-linked and O-GalNAc CHO glycoproteins, obtained the weighted frequency of N-linked and O-GalNAc glycosites across the CHO cell proteome, and have derived stoichiometric coefficients for NS consumption towards CHO cell glycosylation. By combining the obtained stoichiometric coefficients with previously reported data for specific growth and productivity of CHO cells, we observe that the demand of NSs towards glycosylation is significant and, thus, is required to better understand the burden of glycosylation on cellular metabolism. The estimated demand of NSs towards CHO cell glycosylation can be used to rationally design feeding strategies that ensure optimal and consistent rTP glycosylation. PMID:27345611

  6. Estimation-theoretic approach to delayed decoding of predictively encoded video sequences.

    PubMed

    Han, Jingning; Melkote, Vinay; Rose, Kenneth

    2013-03-01

    Current video coders employ predictive coding with motion compensation to exploit temporal redundancies in the signal. In particular, blocks along a motion trajectory are modeled as an auto-regressive (AR) process, and it is generally assumed that the prediction errors are temporally independent and approximate the innovations of this process. Thus, zero-delay encoding and decoding is considered efficient. This paper is premised on the largely ignored fact that these prediction errors are, in fact, temporally dependent due to quantization effects in the prediction loop. It presents an estimation-theoretic delayed decoding scheme, which exploits information from future frames to improve the reconstruction quality of the current frame. In contrast to the standard decoder that reproduces every block instantaneously once the corresponding quantization indices of residues are available, the proposed delayed decoder efficiently combines all accessible (including any future) information in an appropriately derived probability density function, to obtain the optimal delayed reconstruction per transform coefficient. Experiments demonstrate significant gains over the standard decoder. Requisite information about the source AR model is estimated in a spatio-temporally adaptive manner from a bit-stream conforming to the H.264/AVC standard, i.e., no side information needs to be sent to the decoder in order to employ the proposed approach, thereby retaining compatibility with the standard syntax and with existing encoders.

  7. Comparison of two bioimpedance spectroscopy techniques in the assessment of body fluid volumes.

    PubMed

    Neves, E B; Pino, A V; Souza, M N

    2009-01-01

    The present study aimed to compare estimates of body fluid volumes obtained with two bioimpedance spectroscopy techniques: one based on a step-response technique (BIS-PEB) and the other on the multifrequency Xitron Hydra 4200 device (Xitron Technologies, San Diego, CA, USA). The convenience sample initially comprised 422 students from a military parachuting course of the Brazilian Army, from which 42 male students were randomly selected for evaluation over three weeks. The anthropometric characteristics of the sample can be summarized as: age 25.18 +/- 4.10 years; weight 76.77 +/- 7.84 kg; height 174.96 +/- 5.67 cm; body mass index (BMI) 25.05 +/- 2.11 kg m(-2). Bland-Altman plots were used to compare the two methods with respect to estimates of extracellular fluid (ECF), intracellular fluid (ICF), and total body water (TBW). The estimates of the two techniques show good correlation, especially for ECF (r = 0.975). The present study indicates that the BIS-PEB technique, combined with the De Lorenzo equation, can supply noninvasive estimates of body fluid volumes comparable to those of the Xitron Hydra 4200 device.
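    The Bland-Altman method used above compares two measurement techniques via the mean difference (bias) and the 95% limits of agreement. A minimal sketch; the sample values are illustrative, not the study's data:

```python
# Minimal Bland-Altman agreement analysis between two ECF estimates (litres),
# as used in the study to compare BIS-PEB with the Xitron Hydra 4200.
import statistics

ecf_a = [15.2, 16.1, 14.8, 15.9, 16.4, 15.0]  # method A (e.g. BIS-PEB)
ecf_b = [15.0, 16.3, 14.9, 15.7, 16.6, 15.1]  # method B (e.g. Xitron Hydra)

diffs = [a - b for a, b in zip(ecf_a, ecf_b)]
means = [(a + b) / 2 for a, b in zip(ecf_a, ecf_b)]

bias = statistics.mean(diffs)                 # mean difference between methods
sd = statistics.stdev(diffs)                  # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)    # 95% limits of agreement
```

    Plotting `diffs` against `means` then gives the familiar Bland-Altman scatter with horizontal lines at `bias` and `loa`.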

  8. Detecting and estimating signals in noisy cable structures, II: information theoretical analysis.

    PubMed

    Manwani, A; Koch, C

    1999-11-15

    This is the second in a series of articles that seek to recast classical single-neuron biophysics in information-theoretical terms. Classical cable theory focuses on analyzing the voltage or current attenuation of a synaptic signal as it propagates from its dendritic input location to the spike initiation zone. On the other hand, we are interested in analyzing the amount of information lost about the signal in this process due to the presence of various noise sources distributed throughout the neuronal membrane. We use a stochastic version of the linear one-dimensional cable equation to derive closed-form expressions for the second-order moments of the fluctuations of the membrane potential associated with different membrane current noise sources: thermal noise, noise due to the random opening and closing of sodium and potassium channels, and noise due to the presence of "spontaneous" synaptic input. We consider two different scenarios. In the signal estimation paradigm, the time course of the membrane potential at a location on the cable is used to reconstruct the detailed time course of a random, band-limited current injected some distance away. Estimation performance is characterized in terms of the coding fraction and the mutual information. In the signal detection paradigm, the membrane potential is used to determine whether a distant synaptic event occurred within a given observation interval. In the light of our analytical results, we speculate that the length of weakly active apical dendrites might be limited by the information loss due to the accumulated noise between distal synaptic input sites and the soma and that the presence of dendritic nonlinearities probably serves to increase dendritic information transfer.

  9. Laser biostimulation of wound healing: bioimpedance measurements support histology.

    PubMed

    Solmaz, Hakan; Dervisoglu, Sergulen; Gulsoy, Murat; Ulgen, Yekta

    2016-11-01

    Laser biostimulation in medicine has become widespread, supporting the idea of therapeutic effects of photobiomodulation in biological tissues. The aim of this study was to investigate the biostimulation effect of laser irradiation on the healing of cutaneous skin wounds, in vivo, by means of bioimpedance measurements and histological examinations. Cutaneous skin wounds on rats were subjected to 635 nm diode laser irradiation at two energy densities, 1 and 3 J/cm(2), separately. Changes in the electrical properties of the wound sites were examined with multi-frequency electrical impedance measurements performed on the 3rd, 7th, 10th, and 14th days following wounding. Tissue samples were examined both morphologically and histologically to determine the relationship between the electrical properties and the structure of tissues during healing. Laser irradiation at both energy densities stimulated the wound healing process; in particular, irradiation at the lower energy density had a more pronounced effect during the first days of healing. On the 7th day of healing, 3 J/cm(2) laser-irradiated tissues had significantly smaller wound areas compared to non-irradiated wounds (p < 0.05). The electrical impedance results supported the idea of laser biostimulation of the healing of cutaneous skin wounds. Thus, bioimpedance measurements may be considered a non-invasive supplementary method for following the healing process of laser-irradiated tissues.

  10. Detection of fruit quality based on bioimpedance using probe electrodes

    NASA Astrophysics Data System (ADS)

    Li, Qunhe; Wang, Jianping; Ye, Zunzhong; Ying, Yibin; Li, Yanbin

    2005-11-01

    Fruit impedance is related to fruit internal quality. Differences in the impedance of apples of varying quality were measured using a Solartron 1294 impedance interface, a Solartron 1260 impedance analyzer, four brass-wire probe electrodes, a personal computer, and fruit bioimpedance measurement software. Apples were tested over the frequency range 1 Hz ~ 1 MHz, and the relation between their impedance parameters and their quality was studied. The experimental results indicate that increasing the frequency consistently decreased the impedance: as the frequency was increased from 1 Hz to 1 MHz, the two-point impedance of the apple surface decreased 12-15 fold. The impedance of good-quality apples was nearly constant at both low and high frequency. For rotten apples, the impedance was similar to that of good apples at high frequency but differed at low frequency, being 60-100 ohm lower than normal depending on the size of the rotten area. At 1 Hz, the measured impedance of a 10 cm2 rotten area was 397 ohm, about 200 ohm lower than normal. In conclusion, it is feasible to use bioimpedance to distinguish the internal quality of fruit.

  11. Bioimpedance measurements of human body composition: critical analysis and outlook.

    PubMed

    Matthie, James R

    2008-03-01

    Bioimpedance spectroscopy represents one of the largest emerging medical device technologies. The method is generally known as impedance spectroscopy and is an inexpensive, yet extremely powerful, analytical technique for studying the electrical properties of materials. Much of what we know about biological cells and tissues comes from use of this technique in vitro. Due to the high impedance of the cell membrane, current flow through the cell is frequency dependent and this allows the fluid volume inside versus outside the body's cells to be determined. The fluid outside the cells is primarily related to fluid volume status while the intracellular fluid also relates to the body's cellular mass. Technical advances have removed much of the method's basic complexities. The first commercial bioimpedance spectroscopy device for in vivo human body composition studies was introduced in 1990. Major strides have been made and the method is now poised to enter mainstream clinical medicine but the field is only in its infancy. This paper attempts to fully describe the current use of impedance in the body composition field.

  12. A Bioimpedance Analysis Platform for Amputee Residual Limb Assessment

    PubMed Central

    Sanders, JE; Moehring, MM; Rothlisberger, TM; Phillips, RH; Hartley, T; Dietrich, CR; Redd, CB; Gardner, DW; Cagle, JC

    2016-01-01

    Objective The objective of this research was to develop a bioimpedance platform for monitoring fluid volume in residual limbs of people with trans-tibial limb loss using prostheses. Methods A customized multi-frequency current stimulus profile was sent to thin flat electrodes positioned on the thigh and distal residual limb. The applied current signal and sensed voltage signals from four pairs of electrodes located on the anterior and posterior surfaces were demodulated into resistive and reactive components. An established electrical model (Cole) and segmental limb geometry model were used to convert results to extracellular and intracellular fluid volumes. Bench tests and testing on amputee participants were conducted to optimize the stimulus profile and electrode design and layout. Results The proximal current injection electrode needed to be at least 25 cm from the proximal voltage sensing electrode. A thin layer of hydrogel needed to be present during testing to ensure good electrical coupling. Using a burst duration of 2.0 ms, intermission interval of 100 μs, and sampling delay of 10 μs at each of 24 frequencies except 5 kHz which required a 200 μs sampling delay, the system achieved a sampling rate of 19.7 Hz. Conclusion The designed bioimpedance platform allowed system settings and electrode layouts and positions to be optimized for amputee limb fluid volume measurement. Significance The system will be useful towards identifying and ranking prosthetic design features and participant characteristics that impact residual limb fluid volume. PMID:26595906
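    The Cole model mentioned above is the standard electrical model for mapping measured impedance spectra to extracellular (R0, zero-frequency resistance) and combined extra/intracellular (Rinf, infinite-frequency resistance) fluid paths. A minimal sketch; parameter values are illustrative, not the platform's:

```python
# Cole impedance model: Z(w) = Rinf + (R0 - Rinf) / (1 + (j*w*tau)**alpha).
# R0 dominates at low frequency (current confined to extracellular fluid);
# Z approaches Rinf at high frequency (current also crosses cell membranes).
import cmath

def cole_impedance(f, r0, rinf, tau, alpha):
    """Complex impedance of the Cole model at frequency f (Hz)."""
    jw = 1j * 2 * cmath.pi * f
    return rinf + (r0 - rinf) / (1 + (jw * tau) ** alpha)

# At f -> 0 the model returns R0; at very high f it approaches Rinf.
z_low = cole_impedance(1e-3, r0=60.0, rinf=25.0, tau=1e-6, alpha=0.7)
z_high = cole_impedance(1e9, r0=60.0, rinf=25.0, tau=1e-6, alpha=0.7)
```

    Fitting (r0, rinf, tau, alpha) to a measured spectrum, then feeding R0 and Rinf into a segmental limb geometry model, yields the extracellular and intracellular volume estimates described in the abstract.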

  13. Theoretical estimation of systematic errors in local deformation measurements using digital image correlation

    NASA Astrophysics Data System (ADS)

    Xu, Xiaohai; Su, Yong; Zhang, Qingchuan

    2017-01-01

    The measurement accuracy of the digital image correlation (DIC) method in local deformations such as the Portevin-Le Chatelier bands, the deformations near a gap, and crack tips has raised major concern. The measured displacement and strain results are heavily affected by the calculation parameters (such as the subset size, the grid step, and the strain window size) due to under-matched shape functions (for displacement measurement) and surface fitting functions (for strain calculation). To evaluate the systematic errors in local deformations, theoretical estimations and approximations of displacement and strain systematic errors have been deduced for the case where first-order shape functions and quadric surface fitting functions are employed. The following results emerge: (1) the approximate displacement systematic errors are proportional to the second-order displacement gradients, with a ratio determined only by the subset size; (2) the approximate strain systematic errors are functions of the third-order displacement gradients, with coefficients dependent on the subset size, the grid step, and the strain window size. Simulated experiments have been carried out to verify the reliability of these estimates. In addition, a convenient way to approximately evaluate the displacement systematic errors, by comparing displacement results measured by the DIC method with different subset sizes, is proposed.
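    The proportionality in result (1) can be illustrated with a one-dimensional sketch (a deliberate simplification of the paper's 2-D analysis, not its full derivation): fit a first-order shape function by least squares to a quadratic displacement field over a symmetric subset [-M, M]. The linear fit cannot represent the quadratic term, and its mean over the subset is absorbed into the constant coefficient:

```latex
u(x) = u_0 + u' x + \tfrac{1}{2} u'' x^2, \qquad \hat{u}(x) = a_0 + a_1 x,
\qquad
a_0 = u_0 + \frac{u''}{2}\cdot\frac{1}{2M}\int_{-M}^{M} x^2\,dx
    = u_0 + \frac{u'' M^2}{6}.
```

    The resulting bias, Δu = a0 − u0 = u''M²/6, depends only on the second-order gradient u'' and the subset half-width M, consistent with the abstract's statement that the ratio is determined solely by the subset size.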

  14. A Theoretical Mathematical Model to Estimate Blood Volume in Clinical Practice.

    PubMed

    D'Angelo, Matthew; Hodgen, R Kyle; Wofford, Kenneth; Vacchiano, Charles

    2015-10-01

    Perioperative intravenous (IV) fluid management is controversial. Fluid therapy is guided by inaccurate algorithms and changes in the patient's vital signs that are nonspecific for changes to the patient's blood volume (BV). Anesthetic agents, patient comorbidities, and surgical techniques interact and further confound clinical assessment of volume status. Through adaptation of existing acute normovolemic hemodilution algorithms, it may be possible to predict patient's BV by measuring hematocrit (HcT) before and after hemodilution. Our proposed mathematical model requires the following four data points to estimate a patient's total BV: ideal BV, baseline HcT, a known fluid bolus (FB), and a second HcT following the FB. To test our method, we obtained 10 ideal and 10 actual subject BV data measures from 9 unique subjects derived from a commercially used Food and Drug Administration-approved, semi-automated, BV analyzer. With these data, we calculated the theoretical BV change following a FB. Using the four required data points, we predicted BVs (BVp) and compared our predictions with the actual BV (BVa) measures provided by the data set. The BVp calculated using our model highly correlated with the BVa provided by the BV analyzer data set (df = 8, r = .99). Our calculations suggest that, with accurate HcT measurement, this method shows promise for the identification of abnormal BV states such as hyper- and hypovolemia and may prove to be a reliable method for titrating IV fluid.
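    The hemodilution idea behind the model can be illustrated with a simplified sketch. Assuming, as a simplification of the paper's four-point model, that red-cell mass is conserved and the known fluid bolus remains intravascular, then BV x HcT1 = (BV + FB) x HcT2, which can be solved for BV:

```python
# Simplified dilution sketch: conservation of red-cell mass during a known
# fluid bolus gives BV * Hct1 = (BV + FB) * Hct2, hence
# BV = FB * Hct2 / (Hct1 - Hct2). This is an illustrative reduction of the
# paper's model, which additionally uses ideal blood volume as a data point.

def estimate_blood_volume(hct_before, hct_after, fluid_bolus_l):
    """Estimate blood volume (litres) from hematocrit dilution by a known bolus."""
    if hct_after >= hct_before:
        raise ValueError("hematocrit must fall after dilution")
    return fluid_bolus_l * hct_after / (hct_before - hct_after)

# Example: a 0.5 L bolus diluting Hct from 0.40 to ~0.3636 implies BV ~ 5 L
bv = estimate_blood_volume(0.40, 0.36363636, 0.5)
```

    The sensitivity of this relation to small HcT errors is why the abstract stresses that accurate hematocrit measurement is a precondition for the method.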

  15. Theoretical Estimation of the Acoustic Energy Generation and Absorption Caused by Jet Oscillation

    NASA Astrophysics Data System (ADS)

    Takahashi, Kin'ya; Iwagami, Sho; Kobayashi, Taizo; Takami, Toshiya

    2016-04-01

    We investigate the energy transfer between the fluid field and acoustic field caused by a jet driven by an acoustic particle velocity field across it, which is the key to understanding the aerodynamic sound generation of flue instruments, such as the recorder, flute, and organ pipe. Howe's energy corollary allows us to estimate the energy transfer between these two fields. For simplicity, we consider the situation such that a free jet is driven by a uniform acoustic particle velocity field across it. We improve the semi-empirical model of the oscillating jet, i.e., exponentially growing jet model, which has been studied in the field of musical acoustics, and introduce a polynomially growing jet model so as to apply Howe's formula to it. It is found that the relative phase between the acoustic oscillation and jet oscillation, which changes with the distance from the flue exit, determines the quantity of the energy transfer between the two fields. The acoustic energy is mainly generated in the downstream area, but it is consumed in the upstream area near the flue exit in driving the jet. This theoretical examination well explains the numerical calculation of Howe's formula for the two-dimensional flue instrument model in our previous work [http://doi.org/10.1088/0169-5983/46/6/061411, Fluid Dyn. Res. 46, 061411 (2014)] as well as the experimental result of Yoshikawa et al. [http://doi.org/10.1016/j.jsv.2012.01.026, J. Sound Vib. 331, 2558 (2012)].
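    Howe's energy corollary, which the analysis above relies on, is commonly written as follows (standard form; sign conventions vary between authors):

```latex
\langle P \rangle \;=\; -\,\rho_0 \int_V
\big\langle (\boldsymbol{\omega} \times \mathbf{v}) \cdot \mathbf{u} \big\rangle \, dV
```

    where ρ0 is the mean density, ω the vorticity, v the fluid velocity, and u the acoustic particle velocity. A positive ⟨P⟩ corresponds to net acoustic energy generation and a negative ⟨P⟩ to absorption, matching the downstream-generation/upstream-consumption picture described in the abstract.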

  16. PREFACE: First Latin-American Conference on Bioimpedance (CLABIO 2012)

    NASA Astrophysics Data System (ADS)

    Bertemes Filho, Pedro

    2012-12-01

    The past decade has witnessed an unprecedented growth in medical technologies and a new generation of diagnostics, characterized by mobility, virtualization, homecare and costs. The ever growing demand and the rapid need for low cost tools for characterizing human tissue, and supporting intelligence and technologies for non-invasive tissue cancer investigation raise unique and evolving opportunities for research in Electrical Bioimpedance. The CLABIO2012 - First Latin American Conference on Bioimpedance is a premier Latin-American conference on Bioimpedance for research groups working on Electrical Bioimpedance. It allows Latin American researchers to share their experiences with other groups from all over the world by presenting scientific work and potential innovations in this research area and also in the social events promoting informal get togethers in the Brazilian style. The work covers a broad range including Biomedical Engineering and Computing, Medical Physics and Medical Sciences, Environment, Biology and Chemistry. Also, the Conference is intended to give students and research groups the opportunity to learn more about Bioimpedance as an important tool in biological material characterization and also in diagnosis. The conference is designed to showcase cutting edge research and accomplishments, and to enrich the educational and industrial experience in this field. It also represents a unique opportunity to meet colleagues and friends, exchanging ideas, and learning about new developments and best practice, while working to advance the understanding of the knowledge base that we will collectively draw upon in the years ahead to meet future challenges. Participants will attend presentations by scholars representing both institutes and academia. The CLABIO2012 proceedings include over 25 papers selected via a peer review process. 
The conference program features tutorial talks by world-leading scholars and five sessions of regular-paper oral presentations.

  17. Theoretical estimate on tensor-polarization asymmetry in proton-deuteron Drell-Yan process

    NASA Astrophysics Data System (ADS)

    Kumano, S.; Song, Qin-Tao

    2016-09-01

    Tensor-polarized parton distribution functions are new quantities in spin-1 hadrons such as the deuteron, and they could probe new quark-gluon dynamics in hadron and nuclear physics. In charged-lepton deep inelastic scattering, they are studied by the twist-2 structure functions b1 and b2. The HERMES Collaboration found unexpectedly large b1 values compared to a naive theoretical expectation based on the standard deuteron model. The situation should be significantly improved in the near future by an approved experiment to measure b1 at Thomas Jefferson National Accelerator Facility (JLab). There is also an interesting indication in the HERMES result that finite antiquark tensor polarization exists. It could play an important role in solving a mechanism on tensor structure in the quark-gluon level. The tensor-polarized antiquark distributions are not easily determined from the charged-lepton deep inelastic scattering; however, they can be measured in a proton-deuteron Drell-Yan process with a tensor-polarized deuteron target. In this article, we estimate the tensor-polarization asymmetry for a possible Fermilab Main-Injector experiment by using optimum tensor-polarized parton distribution functions to explain the HERMES measurement. We find that the asymmetry is typically a few percent. If it is measured, it could probe new hadron physics, and such studies could create an interesting field of high-energy spin physics. In addition, we find that a significant tensor-polarized gluon distribution should exist due to Q2 evolution, even if it were zero at a low Q2 scale. The tensor-polarized gluon distribution has never been observed, so it is an interesting future project.

  18. A high accuracy broadband measurement system for time resolved complex bioimpedance measurements.

    PubMed

    Kaufmann, S; Malhotra, A; Ardelt, G; Ryschka, M

    2014-06-01

    Bioimpedance measurements are useful tools in biomedical engineering and life science. Bioimpedance is the electrical impedance of living tissue and can be used in the analysis of various physiological parameters. Bioimpedance is commonly measured by injecting a small well known alternating current via surface electrodes into an object under test and measuring the resultant surface voltages. It is non-invasive, painless and has no known hazards. This work presents a field programmable gate array based high accuracy broadband bioimpedance measurement system for time resolved bioimpedance measurements. The system is able to measure magnitude and phase of complex impedances under test in a frequency range of about 10-500 kHz with excitation currents from 10 µA to 5 mA. The overall measurement uncertainties stay below 1% for the impedance magnitude and below 0.5° for the phase in most measurement ranges. Furthermore, the described system has a sample rate of up to 3840 impedance spectra per second. The performance of the bioimpedance measurement system is demonstrated with a resistor based system calibration and with measurements on biological samples.
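    Systems of this kind typically recover the magnitude and phase of Z = V/I by quadrature (IQ) demodulation of the sampled excitation and response signals. A minimal sketch of that principle; the frequency, sample rate, and signal values are illustrative, not the settings of the FPGA system described above:

```python
# Quadrature demodulation: correlate the sampled signal with cos and sin
# references at the excitation frequency to recover its complex amplitude,
# then divide voltage by current phasors to obtain complex impedance.
import math

def demodulate(samples, f, fs):
    """Return the complex amplitude of a tone at frequency f (sample rate fs)."""
    n = len(samples)
    i = sum(s * math.cos(2 * math.pi * f * k / fs) for k, s in enumerate(samples))
    q = sum(s * math.sin(2 * math.pi * f * k / fs) for k, s in enumerate(samples))
    return complex(2 * i / n, -2 * q / n)

fs, f, n = 1_000_000, 50_000, 1000          # 1 MHz sampling, 50 kHz excitation
current = [1e-3 * math.cos(2 * math.pi * f * k / fs) for k in range(n)]
voltage = [0.1 * math.cos(2 * math.pi * f * k / fs - 0.3) for k in range(n)]

z = demodulate(voltage, f, fs) / demodulate(current, f, fs)
# |z| is about 100 ohms with a phase of about -0.3 rad
```

    Repeating this per excitation frequency yields the complex impedance spectrum; doing so continuously gives the time-resolved spectra the paper reports.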

  19. Application of longitudinal and transversal bioimpedance measurements in peritoneal dialysis at 50 kHz

    NASA Astrophysics Data System (ADS)

    Nescolarde, L.; Doñate, T.; Casañas, R.; Rosell-Ferrer, J.

    2010-04-01

    More relevant information about fluid changes in peritoneal dialysis (PD) may be obtained from segmental bioimpedance measurements than from whole-body measurements, which can conceal information about body composition. Whole-body and segmental bioimpedance measurements were obtained using 5 configurations (whole-body or right-side (RS), longitudinal-leg (L-LEG), longitudinal-abdomen (L-AB), transversal-abdomen (T-AB), and transversal-leg (T-LEG)) in 20 PD patients: 15 males (56.5 ± 9.4 yr, 24.2 ± 4.2 kg/m2) and 5 females (58.4 ± 7.1 yr, 28.2 ± 5.9 kg/m2). The aim of this study was to analyze the relationship between whole-body, longitudinal-segmental (L-LEG and L-AB), and transversal-segmental (T-AB and T-LEG) bioimpedance measurements at 50 kHz and clinical parameters of cardiovascular risk, dyslipidemia, nutrition, and hydration. The Kolmogorov-Smirnov test was used to test all variables for normality. Longitudinal bioimpedance parameters were normalized by patient height. Spearman correlation was used to analyze the correlation between bioimpedance and clinical parameters, with statistical significance set at P < 0.05. Transversal bioimpedance measurements showed higher correlation with clinical parameters than longitudinal measurements.
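    The Spearman correlation used in the analysis above operates on ranks rather than raw values, making it robust to the non-normal variables flagged by the Kolmogorov-Smirnov test. A minimal pure-Python sketch (classic formula, no tie correction):

```python
# Spearman rank correlation via rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)),
# where d is the per-subject difference between the ranks of x and y.

def rank(xs):
    """Return 1-based ranks of xs (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

rho = spearman([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])  # mostly concordant ranks
```

    With tied observations, a rank-averaging variant (or computing Pearson correlation on the ranks) should be used instead.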

  20. Electrical Bioimpedance Analysis: A New Method in Cervical Cancer Screening

    PubMed Central

    Das, Soumen; Chatterjee, Jyotirmoy

    2015-01-01

    Cervical cancer is the second most common cancer in women worldwide and a disease of concern due to its high incidence of about 500,000 new cases annually, and it is responsible for about 280,000 deaths a year. The mortality and morbidity of cervical cancer are reduced through mass screening via the Pap smear, but this technique suffers from a very high false-negative rate of around 30% to 40%, so its sensitivity is no more than 60%. Electrical bioimpedance study employing cytosensors over a frequency range offers an instantaneous and quantitative means of monitoring cellular events and is an emerging technique for classifying cells in real time as normal or abnormal. This technology is exploited for label-free detection of disease by identifying and measuring nonbiological parameters of the cell which may carry the disease signature. PMID:27006939

  1. Circular motion analysis of time-varying bioimpedance.

    PubMed

    Sanchez, B; Louarroudi, E; Rutkove, S B; Pintelon, R

    2015-11-01

    This paper presents a step towards the analysis of a linear periodically time-varying (PTV) bioimpedance ZPTV(jω, t), an important subclass of linear time-varying (LTV) bioimpedance. Similarly to the Fourier coefficients of a periodic signal, a PTV impedance can be decomposed into frequency-dependent impedance phasors, [Formula: see text], that rotate with an angular speed of ωr = 2πr/TZ. The vector length of these impedance phasors corresponds to the amplitude of the rth-order harmonic impedance |Zr(jω)|, and the initial phase is given by Φr(ω, t0) = [Symbol: see text]Zr(jω) + 2πrt0/TZ, with t0∈[0, T] being a time instant within the measurement time T. The impedance period TZ stands for the cycle length of the bio-system under investigation; for example, the elapsed time between two consecutive R-waves in the electrocardiogram, or the breathing periodicity, in the case of the heart or lungs, respectively. First, it is demonstrated that the harmonic impedance phasor [Formula: see text], at a particular measured frequency k, can be represented by a rotating phasor, leading to the so-called circular motion analysis technique. Next, the two-dimensional (2D) representation of the harmonic impedance phasors is extended to a three-dimensional (3D) coordinate system by taking the frequency dependence into account. Finally, we introduce a new visualization tool that summarizes the frequency response behavior of ZPTV(jω, t) in a single 3D plot using the local Frenet-Serret frame. This novel 3D impedance representation is then compared with the 3D Nyquist representation of a PTV impedance. The concepts are illustrated through real measurements conducted on a PTV RC-circuit.
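    The decomposition into harmonic impedance phasors described above is, for sampled data, a discrete Fourier analysis of one impedance period. A minimal sketch with illustrative data (a 50-ohm baseline modulated by a single first harmonic), not the paper's measurements:

```python
# Decompose one period of a time-varying impedance Z(t), sampled at a fixed
# measurement frequency, into harmonic impedance phasors Z_r (the analogue of
# Fourier coefficients of a periodic signal).
import cmath

def harmonic_phasors(z_samples, max_order):
    """Z_r = (1/N) * sum_n z[n] * exp(-j*2*pi*r*n/N) for r = 0..max_order."""
    n_samples = len(z_samples)
    return [
        sum(z * cmath.exp(-2j * cmath.pi * r * n / n_samples)
            for n, z in enumerate(z_samples)) / n_samples
        for r in range(max_order + 1)
    ]

N = 64
zs = [50.0 + 2.0 * cmath.cos(2 * cmath.pi * n / N) for n in range(N)]
phasors = harmonic_phasors(zs, 2)
# phasors[0]: the 50-ohm mean; |phasors[1]|: half the 2-ohm cosine amplitude;
# phasors[2]: essentially zero (no second harmonic present)
```

    Each Z_r then rotates at ωr = 2πr/TZ, which is exactly the rotating-phasor picture behind the circular motion analysis technique.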

  2. The limits of crop productivity: validating theoretical estimates and determining the factors that limit crop yields in optimal environments

    NASA Technical Reports Server (NTRS)

    Bugbee, B.; Monje, O.

    1992-01-01

    Plant scientists have sought to maximize the yield of food crops since the beginning of agriculture. There are numerous reports of record food and biomass yields (per unit area) in all major crop plants, but many of the record yield reports are in error because they exceed the maximal theoretical rates of the component processes. In this article, we review the component processes that govern yield limits and describe how each process can be individually measured. This procedure has helped us validate theoretical estimates and determine what factors limit yields in optimal environments.

  3. Integrated cervical smear screening using liquid based cytology and bioimpedance analysis

    PubMed Central

    Das, Lopamudra; Sarkar, Tandra; Maiti, Ashok K.; Naskar, Sukla; Das, Soumen; Chatterjee, Jyotirmoy

    2014-01-01

    Objective: To minimize the false negativity of cervical cancer screening with the Papanicolaou (Pap) test, there is a need to explore novel cytological techniques and to identify unique and important cellular features from the perspective of morphological as well as biophysical properties. Materials and Methods: The present study explores the feasibility of low-cost cervical monolayer techniques for extracting cyto-pathological features to classify normal and abnormal conditions. The cervical cells were also analyzed with respect to their electrical bioimpedance. Result: The results show that the newly developed monolayer technique for cervical smears is cost effective and capable of supporting cyto-pathological evaluation. The electrical bioimpedance study showed a distinction of more than two orders of magnitude between abnormal and normal cell populations. Conclusion: Integrating the bioimpedance observations with the proposed low-cost monolayer technology could considerably increase the efficiency of cervical screening, thereby reducing the rate of faulty diagnosis. PMID:25745281

  4. The MOM tunneling diode - Theoretical estimate of its performance at microwave and infrared frequencies

    NASA Technical Reports Server (NTRS)

    Sanchez, A.; Davis, C. F., Jr.; Liu, K. C.; Javan, A.

    1978-01-01

    A theoretical analysis of the metal-oxide-metal (MOM) antenna/diode as a detector of microwave and infrared radiation is presented with the experimental verification conducted in the far infrared. It is shown that the detectivity at room temperature can be as high as 10(10) W(-1) Hz(1/2) at frequencies of 10(14) Hz in the infrared. As a result, design guidelines are obtained for the lithographic fabrication of thin-film MOM structures that are to operate in the 10-micron region of the infrared spectrum.

  5. Diuretics and bioimpedance-measured fluid spaces in hypertensive patients.

    PubMed

    Tapolyai, Mihály; Faludi, Mária; Dossabhoy, Neville R; Barna, István; Lengvárszky, Zsolt; Szarvas, Tibor; Berta, Klára; Fülöp, Tibor

    2014-12-01

    The authors examined the relationship between thiazide-type diuretics and fluid spaces in a cohort of hypertensive patients in a retrospective study of 60 stable hypertensive patients without renal abnormalities who underwent whole-body bioimpedance analysis. Overhydration was greater in the diuretic group, but only to a nonsignificant degree (5.9 vs. 2.9%; P=.21). The total body water did not differ in the two groups (41.8 L vs. 40.5 L; P=.64). Extracellular fluid volume (ECV) (19.7 L vs. 18.5 L; P=.35) and intracellular fluid volume (ICV) spaces (20.8 L vs. 21.3 L; P=.75) were also not significantly different in the two groups. The ratio of ICV:ECV, however, appeared different: 1.05 vs 1.15 (P=.017) and the effect was maintained in the linear regression-adjusted model (β coefficient: -0.143; P=.001). The diuretic-related distortion of ICV:ECV ratio indicates potential fluid redistribution in hypertensive patients, with ICV participating in the process.

  6. Bioimpedance Measurement of Segmental Fluid Volumes and Hemodynamics

    NASA Technical Reports Server (NTRS)

    Montgomery, Leslie D.; Wu, Yi-Chang; Ku, Yu-Tsuan E.; Gerth, Wayne A.; DeVincenzi, D. (Technical Monitor)

    2000-01-01

    Bioimpedance has become a useful tool to measure changes in body fluid compartment volumes. An Electrical Impedance Spectroscopic (EIS) system is described that extends the capabilities of conventional fixed frequency impedance plethysmographic (IPG) methods to allow examination of the redistribution of fluids between the intracellular and extracellular compartments of body segments. The combination of EIS and IPG techniques was evaluated in the human calf, thigh, and torso segments of eight healthy men during 90 minutes of six degree head-down tilt (HDT). After 90 minutes HDT the calf and thigh segments significantly (P < 0.05) lost conductive volume (eight and four percent, respectively) while the torso significantly (P < 0.05) gained volume (approximately three percent). Hemodynamic responses calculated from pulsatile IPG data also showed a segmental pattern consistent with vascular fluid loss from the lower extremities and vascular engorgement in the torso. Lumped-parameter equivalent circuit analyses of EIS data for the calf and thigh indicated that the overall volume decreases in these segments arose from reduced extracellular volume that was not completely balanced by increased intracellular volume. The combined use of IPG and EIS techniques enables noninvasive tracking of multi-segment volumetric and hemodynamic responses to environmental and physiological stresses.

  7. Bioimpedance for the spot measurement of tissue density

    NASA Astrophysics Data System (ADS)

    Dylke, E. S.; Ward, L. C.; Stannard, C.; Leigh, A.; Kilbreath, S. L.

    2013-04-01

    Long-standing lymphoedema is characterised by tissue changes that are currently not detectable using bioimpedance spectroscopy. It has been suggested that a combination of bipolar and tetrapolar measurements may be used to detect these tissue changes at a single site in the transverse direction. This technique was trialled in a group of control participants with no history of lymphoedema or recent upper limb trauma. Repeated spot measurements were made without removal of electrodes to determine biological variability, as well as with removal of electrodes to determine technical reproducibility. The inter-limb spot ratio of the controls was then compared with that of a number of women previously diagnosed with secondary lymphoedema of the forearm. Biological variability was not found to greatly influence repeated measures, but only moderate technical reliability was found despite excellent coefficients of variation for the majority of the measurements. A difference was seen between those with more severe swelling and the controls. This novel technique shows promise in detecting tissue changes associated with long-standing lymphoedema.

  8. Theoretical estimates of photoproduction cross sections for neutral subthreshold pions in carbon-carbon collisions.

    PubMed

    Norbury, J W; Townsend, L W

    1986-01-01

    Using the Weizsäcker-Williams method of virtual quanta, total cross section estimates for the photoproduction of neutral subthreshold pions in carbon-carbon collisions at incident energies below 300 MeV/nucleon are made. Comparisons with recent experimental data indicate that the photoproduction mechanism makes an insignificant contribution to these measured cross sections.

  9. Estimation of acoustical streaming: theoretical model, Doppler measurements and optical visualisation.

    PubMed

    Nowicki, A; Kowalewski, T; Secomski, W; Wójcik, J

    1998-02-01

    An approximate solution for the streaming velocity generated by flat and weakly focused transducers was derived by directly solving, under Dirichlet boundary conditions, the Poisson equation that the Navier-Stokes equation yields for the axial component of the streaming velocity. The theoretical model was verified experimentally using a 32 MHz pulsed Doppler unit. The experimental acoustic fields were produced by three different 4 mm diameter flat and focused transducers driven by a transmitter generating average acoustic powers in the range from 1 microW to 6 mW. The streaming velocity was measured along the ultrasonic beam from 0 to 2 cm. Streaming was induced in a solution of water and corn starch. The experimental results showed that, for a given acoustic power, the streaming velocity was independent of the starch concentration, which was varied from 0.3 to 40 grams of starch per litre of distilled water. Over the applied acoustic powers, the streaming velocity changed linearly from 0.2 to 40 mm/s. Both the theoretical solutions for plane and focused waves and the experimental results were in good agreement. The streaming velocity field was also visualised using particle image velocimetry (PIV) with two different evaluation methods: the first based on FFT cross-correlation analysis between small sections of each pair of images, the second employing an algorithm that searches for local displacements across several images.

  10. Mesh generation and computational modeling techniques for bioimpedance measurements: an example using the VHP data

    NASA Astrophysics Data System (ADS)

    Danilov, A. A.; Salamatova, V. Yu; Vassilevski, Yu V.

    2012-12-01

    Here, a workflow for high-resolution, efficient numerical modeling of bioimpedance measurements is suggested that includes 3D image segmentation, adaptive mesh generation, finite-element discretization, and the analysis of simulation results. Using adaptive unstructured tetrahedral meshes makes it possible to significantly reduce the number of mesh elements while preserving model accuracy. The numerical results illustrate current, potential, and sensitivity field distributions for a conventional Kubicek-like scheme of bioimpedance measurements using a segmented geometric model of the human torso based on Visible Human Project data. The whole-body VHP man computational mesh contains 574 thousand vertices and 3.3 million tetrahedra.

  11. Experimental and theoretical analysis on the procedure for estimating geo-stresses by the Kaiser effect

    NASA Astrophysics Data System (ADS)

    Li, Yuan-Hui; Yang, Yu-Jiang; Liu, Jian-Po; Zhao, Xing-Dong

    2010-10-01

    Acoustic emission tests of core specimens retrieved from boreholes at depths over 1000 m in the Hongtoushan Copper Mine were carried out under uniaxial compressive loading, and a numerical test was also performed using the rock failure process analysis (RFPA2D) code, following the procedure for estimating geo-stresses by the Kaiser effect under uniaxial compression. The Kaiser effect mechanism was analyzed in the framework of statistical damage mechanics. These analyses indicate that the traditional method of estimating geo-stresses by the Kaiser effect is not appropriate and usually yields results smaller than the real values. Furthermore, greater confining compression in the rock mass may result in a larger difference between the Kaiser effect stresses acquired under uniaxial loading in the laboratory and the real in-situ stresses.

  12. Bio-Impedance Characterization Technique with Implantable Neural Stimulator Using Biphasic Current Stimulus

    PubMed Central

    Lo, Yi-Kai; Chang, Chih-Wei; Liu, Wentai

    2016-01-01

    Knowledge of the bio-impedance and its equivalent circuit model at the electrode-electrolyte/tissue interface is important in the application of functional electrical stimulation. Impedance can be used as a figure of merit to evaluate the proximity between electrodes and targeted tissues, and understanding the equivalent circuit parameters of the electrode can further be leveraged to set a safe boundary for stimulus parameters so as not to exceed the water window of the electrodes. In this paper, we present an impedance characterization technique and implement a proof-of-concept system using an implantable neural stimulator and an off-the-shelf microcontroller. The proposed technique yields the parameters of the equivalent circuit of an electrode through large-signal analysis by injecting a single low-intensity biphasic current stimulus with a deliberately inserted inter-pulse delay and by acquiring the transient electrode voltage at three well-specified timings. Using a low-intensity stimulus allows derivation of the electrode double-layer capacitance, since capacitive charge injection dominates when the electrode overpotential is small. Insertion of the inter-pulse delay creates a controlled discharge time for estimating the Faradaic resistance. The proposed method has been validated by measuring the impedance of (a) an emulated Randles cell made of discrete circuit components and (b) a custom-made platinum electrode array in vitro, and comparing the estimated parameters with results from an impedance analyzer. The proposed technique can be integrated into implantable or commercial neural stimulator systems at low extra power consumption, low extra hardware cost, and light computational load. PMID:25569999
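
    The extraction principle can be sketched on an idealized Randles model. Everything below is an illustrative simulation of a single current phase followed by an inter-pulse delay; the component values, current level, and timings are assumptions for illustration, not the paper's hardware or firmware:

```python
import numpy as np

# Idealized Randles model: access resistance Rs in series with the
# double-layer capacitance Cdl shunted by the Faradaic resistance Rf.
# All values below are illustrative assumptions.
Rs, Cdl, Rf = 1e3, 100e-9, 1e6       # ohm, farad, ohm
I = 10e-6                            # A, low-intensity current phase
T_pulse, T_gap = 100e-6, 200e-6      # s, pulse width and inter-pulse delay
tau = Rf * Cdl                       # polarization time constant

# Electrode voltage during the pulse, and polarization decay during the gap
v = lambda t: I * Rs + I * Rf * (1.0 - np.exp(-t / tau))
vp_end = I * Rf * (1.0 - np.exp(-T_pulse / tau))
decay = lambda t: vp_end * np.exp(-t / tau)

# Three samples at "well-specified timings"
v0 = v(1e-9)        # just after onset: ~ I * Rs (access IR drop)
v1 = v(T_pulse)     # end of the pulse
v2 = decay(T_gap)   # end of the inter-pulse delay (IR drop gone)

Rs_est = v0 / I
# Low-intensity assumption: the ramp is dominated by Cdl charging (T_pulse << tau)
Cdl_est = I * T_pulse / (v1 - v0)
# Open-circuit discharge through Rf gives the time constant, hence Rf
tau_est = -T_gap / np.log(v2 / (v1 - v0))
Rf_est = tau_est / Cdl_est
print(Rs_est, Cdl_est, Rf_est)
```

    With these assumed values the three estimates land within a few percent of the model parameters, which is the essence of the three-timing approach.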

  13. Theoretical and experimental study of DOA estimation using AML algorithm for an isotropic and non-isotropic 3D array

    NASA Astrophysics Data System (ADS)

    Asgari, Shadnaz; Ali, Andreas M.; Collier, Travis C.; Yao, Yuan; Hudson, Ralph E.; Yao, Kung; Taylor, Charles E.

    2007-09-01

    The focus of most direction-of-arrival (DOA) estimation work has been on two-dimensional (2D) scenarios in which only the azimuth angle must be estimated, but many practical situations call for a three-dimensional treatment, so estimating both azimuth and elevation angles with high accuracy and low complexity is of interest. We present the theoretical and practical issues of DOA estimation using the Approximate-Maximum-Likelihood (AML) algorithm in a 3D scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm that can be used for DOA estimation with non-isotropic arrays. Various numerical results are presented. We use two acoustic arrays, each consisting of 8 microphones, for field measurements. The processing of the measured data from the acoustic arrays for different azimuth and elevation angles confirms the effectiveness of the proposed methods.

  14. Vehicle dynamic estimation with road bank angle consideration for rollover detection: theoretical and experimental studies

    NASA Astrophysics Data System (ADS)

    Dahmani, H.; Chadli, M.; Rabhi, A.; El Hajjaji, A.

    2013-12-01

    This article describes a method of vehicle dynamics estimation for impending rollover detection. The method is evaluated using professional vehicle dynamics software and then through experimental results from a real test vehicle equipped with an inertial measurement unit. The vehicle dynamic states are estimated in the presence of the road bank angle (treated as a disturbance in the vehicle model) using a robust observer. The estimated roll angle and roll rate are used to compute a rollover index based on the prediction of the lateral load transfer. To anticipate rollover detection, a new method is proposed for computing the time-to-rollover using the load transfer ratio. The nonlinear model used is derived from the vehicle's lateral dynamics and is represented by a Takagi-Sugeno (TS) fuzzy model; this representation accounts for the nonlinearities of the lateral cornering forces. The proposed TS observer is designed with unmeasurable premise variables to handle the unavailability of slip-angle measurements. Simulation results show that the proposed observer and rollover detection method are effective.

  15. SU-E-J-188: Theoretical Estimation of Margin Necessary for Markerless Motion Tracking

    SciTech Connect

    Patel, R; Block, A; Harkenrider, M; Roeske, J

    2015-06-15

    Purpose: To estimate the margin necessary to adequately cover the target using markerless motion tracking (MMT) of lung lesions, given the uncertainty in tracking and the size of the target. Methods: Simulations were developed in Matlab to determine the effect of tumor size and tracking uncertainty on the margin necessary to achieve adequate coverage of the target. For simplicity, the lung tumor was approximated by a circle on a 2D radiograph. The tumor diameter was varied from 0.1 to 30 mm in increments of 0.1 mm. From our previous studies using dual-energy markerless motion tracking, we estimated tracking uncertainties in x and y to have a standard deviation of 2 mm. A Gaussian was used to simulate the deviation between the tracked location and the true target location. For each tumor size, 100,000 deviations were randomly generated, and the margin necessary to achieve at least 95% coverage 95% of the time was recorded. Additional simulations were run for varying uncertainties to demonstrate the effect of the tracking accuracy on the margin size. Results: The simulations showed an inverse relationship between tumor size and the margin necessary to achieve 95% coverage 95% of the time using the MMT technique; the margin decreased exponentially with target size. An increase in tracking accuracy expectedly produced a decrease in margin size as well. Conclusion: In our clinic a 5 mm expansion of the internal target volume (ITV) is used to define the planning target volume (PTV). These simulations show that for tracking accuracies in x and y better than 2 mm, the required margin is less than 5 mm. This simple simulation can provide physicians with a guideline estimate of the margin necessary for clinical use of MMT, based on the accuracy of their tracking and the size of the tumor.
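
    The Monte Carlo logic described in the Methods can be reproduced in outline. The circular-tumor geometry, 2 mm Gaussian tracking error, and the 95%-coverage-95%-of-the-time criterion follow the abstract; the implementation below (the circle-overlap formula and the bisection for the per-sample margin) is our own sketch in Python rather than the authors' Matlab code:

```python
import numpy as np

def lens_area(R, r, d):
    """Overlap area of two circles (radii R, r; center distance d)."""
    if d <= abs(R - r):
        return np.pi * min(R, r) ** 2
    if d >= R + r:
        return 0.0
    a1 = np.clip((d*d + r*r - R*R) / (2*d*r), -1.0, 1.0)
    a2 = np.clip((d*d + R*R - r*r) / (2*d*R), -1.0, 1.0)
    prod = (-d + r + R) * (d + r - R) * (d - r + R) * (d + r + R)
    return r*r*np.arccos(a1) + R*R*np.arccos(a2) - 0.5*np.sqrt(max(prod, 0.0))

def needed_margin(r, d, cov=0.95, tol=1e-3):
    """Smallest margin m so a circle r+m centered d away covers >= cov of the tumor."""
    target = cov * np.pi * r * r
    if lens_area(r, r, d) >= target:
        return 0.0
    lo, hi = 0.0, d          # m = d always yields full coverage
    while hi - lo > tol:
        m = 0.5 * (lo + hi)
        lo, hi = (lo, m) if lens_area(r + m, r, d) >= target else (m, hi)
    return hi

def margin_for(diameter_mm, sigma=2.0, n=4000, seed=0):
    """Margin giving >= 95% coverage in 95% of simulated tracking errors."""
    rng = np.random.default_rng(seed)
    d = np.hypot(rng.normal(0, sigma, n), rng.normal(0, sigma, n))
    r = diameter_mm / 2.0
    return float(np.percentile([needed_margin(r, di) for di in d], 95))

print(margin_for(5.0), margin_for(30.0))  # margin shrinks as the tumor grows
```

    Running this for 5 mm and 30 mm diameters reproduces the qualitative finding: with 2 mm tracking uncertainty the required margin stays below 5 mm and shrinks as the tumor grows.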

  16. Theoretical, observational, and isotopic estimates of the lifetime of the solar nebula

    NASA Technical Reports Server (NTRS)

    Podosek, Frank A.; Cassen, Patrick

    1994-01-01

    There are a variety of isotopic data for meteorites which suggest that the protostellar nebula existed and was involved in making planetary materials for some 10(exp 7) yr or more. Many cosmochemists, however, advocate alternative interpretations of such data in order to comply with a perceived constraint, from theoretical considerations, that the nebula existed only for a much shorter time, usually stated as less than or equal to 10(exp 6) yr. In this paper, we review evidence relevant to solar nebula duration which is available through three different disciplines: theoretical modeling of star formation, isotopic data from meteorites, and astronomical observations of T Tauri stars. Theoretical models based on observations of present star-forming regions indicate that stars like the Sun form by dynamical gravitational collapse of dense cores of cold molecular clouds in the interstellar medium. The collapse to a star and disk occurs rapidly, on a time scale of the order of 10(exp 5) yr. Disks evolve by dissipating energy while redistributing angular momentum, but it is difficult to predict the rate of evolution, particularly for low mass (compared to the star) disks which nonetheless still contain enough material to account for the observed planetary system. There is no compelling evidence, from available theories of disk structure and evolution, that the solar nebula must have evolved rapidly and could not have persisted for more than 1 Ma. In considering chronologically relevant isotopic data for meteorites, we focus on three methodologies: absolute ages by U-Pb/Pb-Pb, and relative ages by short-lived radionuclides (especially Al-26) and by evolution of Sr-87/Sr-86. Two kinds of meteoritic materials, refractory inclusions such as CAIs and differentiated meteorites (eucrites and angrites), appear to have experienced potentially dateable nebular events. In both cases, the most straightforward interpretations of the available data indicate

  17. Theoretical, observational, and isotopic estimates of the lifetime of the solar nebula

    NASA Technical Reports Server (NTRS)

    Podosek, Frank A.; Cassen, Patrick

    1994-01-01

    There are a variety of isotopic data for meteorites which suggest that the protostellar nebula existed and was involved in making planetary materials for some 10(exp 7) yr or more. Many cosmochemists, however, advocate alternative interpretations of such data in order to comply with a perceived constraint, from theoretical considerations, that the nebula existed only for a much shorter time, usually stated as less than or equal to 10(exp 6) yr. In this paper, we review evidence relevant to solar nebula duration which is available through three different disciplines: theoretical modelling of star formation, isotopic data from meteorites, and astronomical observations of T Tauri stars. Theoretical models based on observations of present star-forming regions indicate that stars like the Sun form by dynamical gravitational collapse of dense cores of cold molecular clouds in the interstellar medium. The collapse to a star and disk occurs rapidly, on a time scale of the order of 10(exp 5) yr. Disks evolve by dissipating energy while redistributing angular momentum, but it is difficult to predict the rate of evolution, particularly for low mass (compared to the star) disks which nonetheless still contain enough material to account for the observed planetary system. There is no compelling evidence, from available theories of disk structure and evolution, that the solar nebula must have evolved rapidly and could not have persisted for more than 1 Ma. In considering chronologically relevant isotopic data for meteorites, we focus on three methodologies: absolute ages by U-Pb/Pb-Pb, and relative ages by short-lived radionuclides (especially Al-26) and by evolution of Sr-87/Sr-86. Two kinds of meteoritic materials, refractory inclusions such as CAIs and differentiated meteorites (eucrites and angrites), appear to have experienced potentially dateable nebular events. In both cases, the most straightforward interpretations of the available data indicate nebular events spanning several Ma.

  18. A theoretical estimate of intrinsic ellipticity bispectra induced by angular momenta alignments

    NASA Astrophysics Data System (ADS)

    Merkel, Philipp M.; Schäfer, Björn Malte

    2014-12-01

    Intrinsically aligned galaxy shapes are one of the most important systematics in cosmic shear measurements. So far, theoretical studies of intrinsic alignments have focused almost exclusively on their statistics at the two-point level. Results from numerical simulations, however, suggest that third-order measures might be even more strongly affected. We therefore investigate the (angular) bispectrum of intrinsic alignments. In our fully analytical study, we describe intrinsic galaxy ellipticities by a physical alignment model which makes use of tidal torque theory. We derive expressions for the various combinations of intrinsic and gravitationally induced ellipticities, i.e. III-, GII- and GGI-alignments, and compare our results to the shear bispectrum, the GGG-term. The latter is computed using hyperextended perturbation theory. Considering equilateral and squeezed configurations, we find that for a Euclid-like survey intrinsic alignments (III-alignments) start to dominate on angular scales smaller than 20 and 13 arcmin, respectively. This sensitivity to the configuration-space geometry may allow us to exploit the cosmological information contained in both the intrinsic and gravitationally induced ellipticity fields. On the smallest scales (ℓ ˜ 3000), III-alignments exceed the lensing signal by at least one order of magnitude. The amplitude of the GGI-alignments is the weakest; it stays below that of the shear field on all angular scales irrespective of the wavevector configuration.

  19. Estimating Young's modulus of zona pellucida by micropipette aspiration in combination with theoretical models of ovum.

    PubMed

    Khalilian, Morteza; Navidbakhsh, Mahdi; Valojerdi, Mojtaba Rezazadeh; Chizari, Mahmoud; Yazdi, Poopak Eftekhari

    2010-04-06

    The zona pellucida (ZP) is the spherical layer that surrounds the mammalian oocyte. The physical hardness of this layer plays a crucial role in fertilization but is largely unknown because of the lack of appropriate measurement and modelling methods. The aim of this study is to measure the biomechanical properties of the ZP of human/mouse ova and to test the hypothesis that Young's modulus of the ZP changes with fertilization. Young's moduli of the ZP are determined before and after fertilization using the micropipette aspiration technique, coupled with theoretical models of the oocyte as an elastic incompressible half-space (half-space model), an elastic compressible bilayer (layered model), or an elastic compressible shell (shell model). Comparison of the models shows that incorporating the layered geometry of the ovum and the compressibility of the ZP in the layered and shell models may provide a means of characterizing ZP elasticity more accurately. Although the results of the models differ, all confirm that the ZP hardens following fertilization. Different choices of model and experimental parameters can thus affect the interpretation of experimental data and lead to differing mechanical properties.
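
    For orientation, the half-space (punch) model mentioned above admits a closed-form estimate of Young's modulus from aspiration data, of the Theret type E = 3aΔPφ/(2πL). The numbers below are illustrative placeholders, not the study's measurements:

```python
import math

# Half-space (punch) model often used with micropipette aspiration:
#   E = 3 * a * dP * phi / (2 * pi * L)
# All values below are illustrative placeholders, not measured data.
a = 25e-6     # m, pipette inner radius
dP = 200.0    # Pa, aspiration pressure
L = 5e-6      # m, aspirated length of the zona pellucida
phi = 2.1     # wall-geometry factor (~2 for typical pipette walls)

E = 3 * a * dP * phi / (2 * math.pi * L)  # Young's modulus in Pa
print(round(E))  # prints 1003
```

    With these placeholder values the estimate is on the order of 1 kPa; the point is only that stiffer ZP (larger E) aspirates a shorter tongue L at a given pressure.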

  20. Temperature mapping in bread dough using SE and GE two-point MRI methods: experimental and theoretical estimation of uncertainty.

    PubMed

    Lucas, Tiphaine; Musse, Maja; Bornert, Mélanie; Davenel, Armel; Quellec, Stéphane

    2012-04-01

    Two-dimensional (2D)-SE, 2D-GE and three-dimensional (3D)-GE two-point T(1)-weighted MRI methods were evaluated in this study in order to maximize the accuracy of temperature mapping of bread dough during thermal processing. Uncertainties were propagated throughout each measurement protocol, and comparisons demonstrated that all methods with comparable acquisition times minimized the temperature uncertainty to a similar extent. The experimental uncertainties obtained with low-field MRI were also compared with theoretical estimates. Some discrepancies between experimental and theoretical values of the temperature uncertainty were found; however, the experimental and theoretical trends with varying parameters agreed to a large extent for both SE and GE methods. The 2D-SE method was chosen for further applications on prefermented dough because of its lower sensitivity to susceptibility differences in porous media. It was applied to temperature mapping in prefermented dough during chilling prior to freezing and compared locally with optical-fiber measurements.

  1. Theoretical estimation and experimental studies on gas dissociation in TEA CO2 laser for long term arc free operation

    NASA Astrophysics Data System (ADS)

    Kumar, Manoj; Biswas, A. K.; Bhargav, Pankaj; Reghu, T.; Sahu, Shashikiran; Pakhare, J. S.; Bhagat, M. S.; Kukreja, L. M.

    2013-11-01

    Gas dissociation in a high-energy, high-repetition-rate Transversely Excited Atmospheric (TEA) CO2 laser was studied in both sealed-off and gas-replenishment modes for a nitrogen-lean gas mixture. A comprehensive theoretical model based on the Boltzmann transport equation and the discharge excitation circuit equations was adopted to calculate the amount of CO2 dissociated during a single discharge pulse. It is shown theoretically that inclusion of superelastic collisions in the Boltzmann transport equation is necessary for precise estimation of dissociation per pulse, particularly at high discharge energy loadings and for nitrogen-rich gas mixtures. Gas lifetime for repetitively pulsed operation was found experimentally by measuring the amount of CO formed when frequent arcing sets in under sealed-off operation. Using this model, the optimum replenishment rate of CO2, by gas purging and/or catalytic regeneration, needed for arc-free long-term operation of the laser was estimated. The measured saturation values of CO concentration in the laser chamber agreed well with the calculated values for various operating conditions. Arc-free, long-term repetitively pulsed operation of the laser was achieved in the gas-replenishment mode with gas purging and/or catalytic regeneration.

  2. Theoretical estimate of the effect of thermal agitation on ribosome motion generated by stochastic microswimming.

    PubMed

    González-García, José S

    2016-11-04

    The effect of thermal agitation on ribosome motion is evaluated through the Péclet number, assuming that the ribosome is self-propelled along the mRNA during protein synthesis by a swimming stroke consisting of a cycle of stochastically generated ribosome configurations involving its two subunits. The ribosome velocity probability distribution function is obtained, giving an approximately normal distribution. Its mean and variance, together with an estimate of the in vivo free diffusion coefficient of the ribosome and using only configuration changes of small size, give a Péclet number similar to those of motor proteins and microorganisms. These results suggest the feasibility of the stochastic microswimming hypothesis as an explanation of ribosome motion.

  3. Measurement of body fat using leg to leg bioimpedance

    PubMed Central

    Sung, R; Lau, P; Yu, C; Lam, P; Nelson, E

    2001-01-01

    AIMS—(1) To validate a leg to leg bioimpedance analysis (BIA) device in the measurement of body composition in children by assessment of its agreement with dual energy x ray absorptiometry (DXA) and its repeatability. (2) To establish a reference range of percentage body fat in Hong Kong Chinese children.
METHODS—Sequential BIA and DXA methods were used to determine body composition in 49 children aged 7-18 years; agreement between the two methods was calculated. Repeatability for the BIA method was established from duplicate measurements. Body composition was then determined by BIA in 1139 girls and 1243 boys aged 7-16 years, who were randomly sampled in eight local primary and secondary schools to establish reference ranges.
RESULTS—The 95% limits of agreement between the BIA and DXA methods were considered acceptable (−3.3 kg to −0.5 kg fat mass and −3.9% to 0.6% body fat). Percentage body fat increased with age. Compared with the 1993 Hong Kong growth survey, these children had a higher body mass index. Mean (SD) percentage body fat at 7 years of age was 17.2% (4.4%) and 14.0% (3.4%) for boys and girls respectively, which increased to 19.3% (4.8%) and 27.8% (6.3%) at age 16.
CONCLUSION—Leg to leg BIA is a valid alternative method to DXA for the measurement of body fat. Provisional reference ranges for percentage body fat for Hong Kong Chinese children aged 7-16 years are provided.

 PMID:11517118
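
    The 95% limits of agreement quoted in the Results are Bland-Altman statistics: the mean difference (bias) plus or minus 1.96 times the SD of the paired differences. A sketch on synthetic data (the bias and spread below are invented, chosen only to resemble the reported range):

```python
import numpy as np

# Bland-Altman 95% limits of agreement on synthetic paired measurements.
# The reference values, bias, and noise level are invented for illustration.
rng = np.random.default_rng(1)
dxa = rng.uniform(5.0, 25.0, 49)                # fat mass (kg), reference method
bia = dxa - 1.9 + rng.normal(0.0, 0.7, 49)      # second method: fixed bias + noise

diff = bia - dxa
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
print(bias, loa_low, loa_high)
```

    Roughly 95% of paired differences are expected to fall between the two limits; agreement is "acceptable" when that interval is clinically tolerable.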

  4. In vivo characterization of ischemic small intestine using bioimpedance measurements.

    PubMed

    Strand-Amundsen, R J; Tronstad, C; Kalvøy, H; Gundersen, Y; Krohn, C D; Aasen, A O; Holhjem, L; Reims, H M; Martinsen, Ø G; Høgetveit, J O; Ruud, T E; Tønnessen, T I

    2016-02-01

    The standard clinical method for the assessment of viability in ischemic small intestine is still visual inspection and palpation. This method is non-specific and unreliable, and requires a high level of clinical experience. Consequently, viable tissue might be removed, or irreversibly damaged tissue might be left in the body, either of which may slow patient recovery. Impedance spectroscopy has been used to measure changes in electrical parameters during ischemia in various tissues; the physical changes in the tissue at the cellular and structural levels after the onset of ischemia lead to time-variant changes in the electrical properties. We aimed to investigate the use of bioimpedance measurement to assess whether tissue is ischemic and to estimate the duration of ischemia. Measurements were performed on pigs (n = 7) using a novel two-electrode setup with a Solartron 1260/1294 impedance gain-phase analyser. After induction of anaesthesia, an ischemic model with warm, full mesenteric arterial and venous occlusion of 30 cm of the jejunum was implemented. Electrodes were placed on the serosal surface of the ischemic jejunum, applying a constant voltage and measuring the resulting electrical admittance. As a control, measurements were made on a fully perfused part of the jejunum in the same porcine model. The changes in tan δ (a dielectric parameter) measured within a 6 h period of warm, full mesenteric occlusion ischemia in seven pigs correlate with the onset and duration of ischemia. Tan δ measured in the ischemic part of the jejunum differed significantly from the control tissue, allowing us to determine whether the tissue was ischemic (F(1, 75.13) = 188.19, P < 0.0001). We also found that tan δ could be used to predict ischemic duration. This opens up the possibility of real-time monitoring and assessment of the presence and duration of small intestinal ischemia.
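
    The quantity tan δ is the loss tangent, which for a parallel conductance-capacitance representation of the tissue follows directly from a single-frequency complex admittance, tan δ = G/(ωC). The frequency and component values below are illustrative, not taken from the porcine measurements:

```python
import math

# Loss tangent from a single-frequency admittance Y = G + j*omega*C.
# Frequency and component values are illustrative, not measured data.
f = 10e3                              # Hz, measurement frequency
G = 2e-4                              # S, parallel conductance
C = 5e-9                              # F, parallel capacitance
omega = 2 * math.pi * f
Y = complex(G, omega * C)             # idealized measured admittance

tan_delta = Y.real / Y.imag           # equals G / (omega * C)
print(round(tan_delta, 3))            # prints 0.637
```

    Because ischemia alters both conductive and capacitive tissue properties over time, tracking this ratio rather than either component alone is one plausible reason it discriminates ischemic from perfused tissue.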

  5. Effects of Ventilation on Segmental Bioimpedance Spectroscopy Measures Using Generalizability Theory

    ERIC Educational Resources Information Center

    Turner, A. Allan; Lozano-Nieto, Albert; Bouffard, Marcel

    2010-01-01

    The purpose of this study was to examine the effect of three ventilation conditions (i.e., normal, regimented, and no-ventilation) on the reproducibility of bioimpedance scores in humans for the forearm and trunk segments. One hundred able-bodied North American men and women, from 18 to 71 years of age, volunteered as participants. The…

  6. On differences in radiosensitivity estimation: TCP experiments versus survival curves. A theoretical study.

    PubMed

    Stavrev, Pavel; Stavreva, Nadejda; Ruggieri, Ruggero; Nahum, Alan

    2015-08-07

    We have compared two methods of estimating the cellular radiosensitivity of a heterogeneous tumour, namely, via cell-survival and via tumour control probability (TCP) pseudo-experiments. It is assumed that there exists intra-tumour variability in radiosensitivity and that the tumour consists predominantly of radiosensitive cells and a small number of radio-resistant cells. Using a multi-component, linear-quadratic (LQ) model of cell kill, a pseudo-experimental cell-survival versus dose curve is derived. This curve is then fitted with a mono-component LQ model describing the response of a homogeneous cell population. For the assumed variation in radiosensitivity it is shown that the composite pseudo-experimental survival curve is well approximated by the survival curve of cells with uniform radiosensitivity. For the same initial cell radiosensitivity distribution several pseudo-experimental TCP curves are simulated corresponding to different fractionation regimes. The TCP model used accounts for clonogen proliferation during a fractionated treatment. The set of simulated TCP curves is then fitted with a mono-component TCP model. As in the cell survival experiment the fit with a mono-component model assuming uniform radiosensitivity is shown to be highly acceptable. However, the best-fit values of cellular radiosensitivity produced via the two methods are very different. The cell-survival pseudo-experiment yields a high radiosensitivity value, while the TCP pseudo-experiment shows that the dose-response is dominated by the most resistant sub-population in the tumour, even when this is just a small fraction of the total.
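
    The two-component construction can be sketched numerically. The population fractions and LQ parameters below are invented for illustration (the paper's actual values may differ), but they reproduce the qualitative conclusion: a mono-component fit of the composite survival curve returns a high radiosensitivity, while the survivors of a fractionated course are almost entirely resistant cells:

```python
import numpy as np

# Two sub-populations: mostly radiosensitive cells plus a small resistant
# fraction. Fractions and LQ parameters are illustrative, not the paper's.
f = np.array([0.99, 0.01])        # sensitive, resistant
alpha = np.array([0.35, 0.10])    # Gy^-1
beta = np.array([0.035, 0.010])   # Gy^-2

D = np.linspace(0.0, 10.0, 41)    # single-dose points of the pseudo-experiment
S = (f[:, None] * np.exp(-np.outer(alpha, D) - np.outer(beta, D**2))).sum(axis=0)

# Mono-component fit of the composite curve: -ln S ~ alpha_eff*D + beta_eff*D^2
A = np.column_stack([D, D**2])
(alpha_eff, beta_eff), *_ = np.linalg.lstsq(A, -np.log(S), rcond=None)

# After a fractionated course (30 x 2 Gy) the survivors are almost all resistant
n_frac, d_frac = 30, 2.0
S_end = f * np.exp(-n_frac * (alpha * d_frac + beta * d_frac**2))
resistant_share = S_end[1] / S_end.sum()
print(alpha_eff, resistant_share)
```

    The fitted alpha_eff tracks the sensitive majority (a "high" radiosensitivity), yet at treatment-level cumulative doses the resistant clones dominate survival, which is exactly the discrepancy between the two pseudo-experiments.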

  7. On differences in radiosensitivity estimation: TCP experiments versus survival curves. A theoretical study

    NASA Astrophysics Data System (ADS)

    Stavrev, Pavel; Stavreva, Nadejda; Ruggieri, Ruggero; Nahum, Alan

    2015-08-01

    We have compared two methods of estimating the cellular radiosensitivity of a heterogeneous tumour, namely, via cell-survival and via tumour control probability (TCP) pseudo-experiments. It is assumed that there exists intra-tumour variability in radiosensitivity and that the tumour consists predominantly of radiosensitive cells and a small number of radio-resistant cells. Using a multi-component, linear-quadratic (LQ) model of cell kill, a pseudo-experimental cell-survival versus dose curve is derived. This curve is then fitted with a mono-component LQ model describing the response of a homogeneous cell population. For the assumed variation in radiosensitivity it is shown that the composite pseudo-experimental survival curve is well approximated by the survival curve of cells with uniform radiosensitivity. For the same initial cell radiosensitivity distribution several pseudo-experimental TCP curves are simulated corresponding to different fractionation regimes. The TCP model used accounts for clonogen proliferation during a fractionated treatment. The set of simulated TCP curves is then fitted with a mono-component TCP model. As in the cell survival experiment the fit with a mono-component model assuming uniform radiosensitivity is shown to be highly acceptable. However, the best-fit values of cellular radiosensitivity produced via the two methods are very different. The cell-survival pseudo-experiment yields a high radiosensitivity value, while the TCP pseudo-experiment shows that the dose-response is dominated by the most resistant sub-population in the tumour, even when this is just a small fraction of the total.

  8. Experimental and Theoretical Estimation of Excited Species Generation in Pulsed Electron Beam-Generated Plasmas Produced in Pure Argon, Nitrogen, Oxygen, and Their Mixtures

    DTIC Science & Technology

    2011-05-13

    Naval Research Laboratory, Washington, DC 20375-5320. Report NRL/MR/6750--11-9333: Experimental and Theoretical Estimation of Excited Species Generation in Pulsed Electron Beam-Generated Plasmas Produced in Pure Argon, Nitrogen, Oxygen, and Their Mixtures. May 13, 2011. Approved for public release.

  9. Theoretical model for diffusive greenhouse gas fluxes estimation across water-air interfaces measured with the static floating chamber method

    NASA Astrophysics Data System (ADS)

    Xiao, Shangbin; Wang, Chenghao; Wilkinson, Richard Jeremy; Liu, Defu; Zhang, Cheng; Xu, Wennian; Yang, Zhengjian; Wang, Yuchun; Lei, Dan

    2016-07-01

    Aquatic systems are sources of greenhouse gases on different scales; however, the uncertainty of gas fluxes estimated using popular methods is not well defined. Here we show that greenhouse gas fluxes across the air-water interface of seas and inland waters are significantly underestimated by the currently used static floating chamber (SFC) method. We found that the SFC CH4 flux calculated with the popular linear regression (LR) on changes of gas concentration over time accounts for only 54.75% and 35.77% of the corresponding real gas flux when the monitoring periods are 30 and 60 min, respectively, based on the theoretical model and experimental measurements. Our results demonstrate that nonlinear regression models can improve gas flux estimations, and that the exponential regression (ER) model gives the best estimations, close to the true values, when compared to LR. However, the quadratic regression model proved inappropriate for long measurement periods and for aquatic systems with high gas emission rates. The greenhouse gas effluxes emitted from aquatic systems may be much greater than those reported previously, and models of future scenarios of global climate change should be adjusted accordingly.
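
    The LR-versus-ER comparison above can be illustrated with a toy chamber model: headspace concentration rises exponentially toward equilibrium, so the linear slope over the deployment underestimates the initial (flux-proportional) slope, while an exponential fit recovers it. All values (C0, Ceq, k) are illustrative, not from the paper.

```python
import numpy as np

C0, Ceq, k = 2.0, 10.0, 0.03          # ppm, ppm, 1/min (hypothetical)
t = np.linspace(0.0, 60.0, 61)        # 60-minute deployment, sampled every minute
C = Ceq - (Ceq - C0) * np.exp(-k * t)

true_slope = k * (Ceq - C0)           # dC/dt at t = 0, proportional to the flux

# Linear regression (LR) slope over the whole record.
lr_slope = np.polyfit(t, C, 1)[0]

# Exponential regression (ER): fit log(Ceq - C) linearly to recover k, then
# evaluate the initial slope (Ceq treated as known for this noise-free sketch).
k_er = -np.polyfit(t, np.log(Ceq - C), 1)[0]
er_slope = k_er * (Ceq - C0)

print(lr_slope / true_slope)   # well below 1: LR underestimates the flux
print(er_slope / true_slope)   # ~1: ER recovers it
```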

  10. Theoretical Model and Experimental Validation of the estimated proportions of common and independent input to motor neurons.

    PubMed

    Castronovo, A Margherita; Negro, Francesco; Farina, Dario

    2015-01-01

    Motor neurons in the spinal cord receive synaptic input that comprises common and independent components. The part of the synaptic input that is common to all motor neurons is the one regulating the production of force. Therefore, its quantification is important for assessing the strategy used by the Central Nervous System (CNS) to control and regulate movements, especially in physiological conditions such as fatigue. In this study we present and validate a method to estimate the ratio between the strengths of common and independent inputs to motor neurons, and we apply this method to investigate its changes during fatigue. By means of coherence analysis we estimated the level of correlation between motor unit spike trains at the beginning and at the end of fatiguing contractions of the Tibialis Anterior muscle at three different force targets. Combining theoretical modeling and experimental data, we estimated the strength of the common synaptic input relative to the independent one. We observed a consistent increase in the proportion of shared input to motor neurons during fatigue. This may be interpreted as a strategy used by the CNS to counteract the occurrence of fatigue and the concurrent decrease of generated force.
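
    The coherence idea above can be illustrated with surrogate signals: two "neural drive" signals sharing a common input plus independent noise have a flat coherence set by the common-to-total variance ratio. The variances and Welch settings here are arbitrary illustrations, not the paper's estimator.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs, n = 1000.0, 200_000

common = rng.normal(0.0, 1.0, n)          # shared synaptic input
x = common + rng.normal(0.0, 0.5, n)      # motor-neuron drive 1
y = common + rng.normal(0.0, 0.5, n)      # motor-neuron drive 2

f, cxy = coherence(x, y, fs=fs, nperseg=1024)

# Expected coherence: (1 / (1 + 0.25))**2 = 0.64 at every frequency,
# so the measured coherence directly encodes the common/independent ratio.
print(cxy.mean())
```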

  11. A Thorax Simulator for Complex Dynamic Bioimpedance Measurements With Textile Electrodes.

    PubMed

    Ulbrich, Mark; Muhlsteff, Jens; Teichmann, Daniel; Leonhardt, Steffen; Walter, Marian

    2015-06-01

    Bioimpedance measurements on the human thorax are suitable for the assessment of body composition or hemodynamic parameters, such as stroke volume; they are non-invasive, easy to apply and inexpensive. When targeting personal healthcare scenarios, the technology can be integrated into textiles to increase ease, comfort and coverage of measurements. Bioimpedance is generally measured using two electrodes injecting low alternating currents (0.5-10 mA) and two additional electrodes to measure the corresponding voltage drop. The impedance is measured either spectroscopically (bioimpedance spectroscopy, BIS) between 5 kHz and 1 MHz or continuously at a fixed frequency around 100 kHz (impedance cardiography, ICG). A thorax simulator is being developed for testing and calibration of bioimpedance devices and other new developments. For the first time, it is possible to mimic the complete time-variant properties of the thorax during an impedance measurement. This includes the dynamic real part and dynamic imaginary part of the impedance with a peak-to-peak value of 0.2 Ω and an adjustable base impedance (24.6 Ω ≤ Z0 ≤ 51.6 Ω). Another novelty is adjustable complex electrode-skin contact impedances for up to 8 electrodes, allowing bioimpedance devices to be evaluated in combination with textile electrodes. In addition, an electrocardiographic signal, as used in ICG devices, is provided for cardiographic measurements. This makes it possible to generate physiologic impedance changes, and in combination with an ECG, all parameters of interest such as stroke volume (SV), pre-ejection period (PEP) or extracellular resistance (Re) can be simulated. The speed of all dynamic signals can be altered. The simulator was successfully tested with commercially available BIS and ICG devices, and the preset signals are measured with high correlation (r = 0.996).
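
    A Cole-model sketch shows the kind of complex, frequency-dependent thorax impedance such a simulator must reproduce over the BIS band. The parameter values are illustrative, merely chosen so the low-frequency impedance lands inside the simulator's adjustable base-impedance range; they are not taken from the paper.

```python
import numpy as np

def cole(f, r0=45.0, r_inf=25.0, tau=1e-6, alpha=0.7):
    """Cole model: Z(f) = R_inf + (R0 - R_inf) / (1 + (j*2*pi*f*tau)**alpha).
    All parameter values are hypothetical."""
    jw = 1j * 2 * np.pi * f
    return r_inf + (r0 - r_inf) / (1 + (jw * tau)**alpha)

freqs = np.logspace(np.log10(5e3), 6, 50)   # the BIS band: 5 kHz .. 1 MHz
z = cole(freqs)

print(z.real[0], z.real[-1])  # real part falls from near R0 toward R_inf
print(z.imag.min())           # imaginary part is negative (capacitive tissue)
```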

  12. [Whole body versus segmental bioimpedance measurements (BIS) of electrical resistance (Re) and extracellular volume (ECV) for assessment of dry weight in end-stage renal patients treated by hemodialysis].

    PubMed

    Załuska, Wojciech; Małecka, Teresa; Mozul, Sławomir; Ksiazek, Andrzej

    2004-01-01

    The precise estimation of the hydration status of the human body is of great importance for the assessment of dry weight in end-stage renal disease patients treated by hemodialysis. The bioimpedance technique (BIS) is postulated to be easy to use and non-invasive for monitoring the size of hydration compartments such as total body water (TBW) and extracellular volume (ECV). However, the precision of the whole-body bioimpedance technique has been questioned in several research papers. One of the problems lies in fluid transfer from peripheral spaces (limbs) to the central space (trunk) when the position of the body changes (orthostatic effect). This phenomenon can be eliminated using the segmental bioimpedance technique (4200 Hydra Analyzer, Xitron, San Diego, CA, USA). The purpose of the study was to estimate the changes of electrical resistance (Re) and extracellular volume (ECV) before and after 10 hemodialysis sessions using the whole-body bioimpedance technique (WBIS) in comparison to BIS measurements in specific segments of the body: arm (ECVarm), leg (ECVleg), and trunk (ECVtrunk). The sum of extracellular volumes in the segments (2ECVarm + ECVtrunk + 2ECVleg) was 13.26 +/- 1.861 L in comparison to 17.29 +/- 2.07 L (p < 0.01) as measured by the WBIS technique before HD. The electrical resistance Re was 558 +/- 68 Ω as calculated from the sum of segments versus 560 +/- 70 Ω (p < 0.05) as measured by WBIS. After hemodialysis the sum of segmental ECV measurements was 11.42 +/- 1.28 L in comparison to 14.84 +/- 1.31 L (p < 0.001) from the whole-body technique (WBIS), and the electrical resistance Re was 674 +/- 67 Ω as calculated from the sum of segments versus 677 +/- 64 Ω (p < 0.05), respectively.
The observed difference between the identical electrical resistance Re as measured by WBIS in comparison to the sum of segment measurements and important difference between ECV volume as measured

  13. A theoretical model to estimate the oil burial depth on sandy beaches: A new oil spill management tool.

    PubMed

    Bernabeu, Ana M; Fernández-Fernández, Sandra; Rey, Daniel

    2016-08-15

    On oiled sandy beaches, unrecovered fuel can be buried up to several metres deep. This study proposes a theoretical approach to estimating oil burial along the intertidal area. First, our results revealed the existence of two main patterns in seasonal beach profile behaviour. Type A is characterized by intertidal slopes of time-constant steepness which advance/recede parallel to themselves in response to changing wave conditions. Type B is characterized by slopes of time-varying steepness which intersect at a given point in the intertidal area. This finding has a direct influence on the definition of oil depth. The type A pattern exhibits oil burial along the entire intertidal area following decreasing wave energy, while the type B pattern combines burial in the high intertidal zone with exhumation in the mid and/or low intertidal zones, depending on the position of the intersection point. These outcomes should be incorporated as key tools in future oil spill management programs.

  14. A game-theoretic framework for estimating a health purchaser's willingness-to-pay for health and for expansion.

    PubMed

    Yaesoubi, Reza; Roberts, Stephen D

    2010-12-01

    A health purchaser's willingness-to-pay (WTP) for health is defined as the amount of money the health purchaser (e.g. a health maximizing public agency or a profit maximizing health insurer) is willing to spend for an additional unit of health. In this paper, we propose a game-theoretic framework for estimating a health purchaser's WTP for health in markets where the health purchaser offers a menu of medical interventions, and each individual in the population selects the intervention that maximizes her prospect. We discuss how the WTP for health can be employed to determine medical guidelines, and to price new medical technologies, such that the health purchaser is willing to implement them. The framework further introduces a measure for WTP for expansion, defined as the amount of money the health purchaser is willing to pay per person in the population served by the health provider to increase the consumption level of the intervention by one percent without changing the intervention price. This measure can be employed to find how much to invest in expanding a medical program through opening new facilities, advertising, etc. Applying the proposed framework to colorectal cancer screening tests, we estimate the WTP for health and the WTP for expansion of colorectal cancer screening tests for the 2005 US population.

  15. Body Fat Equations and Electrical Bioimpedance Values in Prediction of Cardiovascular Risk Factors in Eutrophic and Overweight Adolescents

    PubMed Central

    Faria, Franciane Rocha; Faria, Eliane Rodrigues; Cecon, Roberta Stofeles; Barbosa Júnior, Djalma Adão; Franceschini, Sylvia do Carmo Castro; Peluzio, Maria do Carmo Gouveia; Ribeiro, Andréia Queiroz; Lira, Pedro Israel Cabral; Cecon, Paulo Roberto; Priore, Silvia Eloiza

    2013-01-01

    The aim of this study was to analyze body fat anthropometric equations and electrical bioimpedance analysis (BIA) in the prediction of cardiovascular risk factors in eutrophic and overweight adolescents. 210 adolescents were divided into a eutrophic group (G1) and an overweight group (G2). The percentage of body fat (%BF) was estimated using 10 body fat anthropometric equations and 2 BIA. We measured lipid profiles, uric acid, insulin, fasting glucose, homeostasis model assessment-insulin resistance (HOMA-IR), and blood pressure. We found that 76.7% of the adolescents exhibited at least one inadequate biochemical parameter or clinical cardiovascular risk factor. Higher values of triglycerides (TG) (P = 0.001), insulin, and HOMA-IR (P < 0.001) were observed in the G2 adolescents. In multivariate linear regression analysis, the %BF from equation (5) was associated with TG, diastolic blood pressure, and insulin in G1. Among the G2 adolescents, the %BF estimated by equations (5) and (9) was associated with LDL, TG, insulin, and HOMA-IR. Body fat anthropometric equations were associated with cardiovascular risk factors and should be used to assess the nutritional status of adolescents. In this study, equation (5) was associated with a higher number of cardiovascular risk factors independent of the nutritional status of the adolescents. PMID:23762051

  16. Mass balance approaches for estimating the intestinal absorption and metabolism of peptides and analogues: theoretical development and applications

    NASA Technical Reports Server (NTRS)

    Sinko, P. J.; Leesman, G. D.; Amidon, G. L.

    1993-01-01

    A theoretical analysis for estimating the extent of intestinal peptide and peptide analogue absorption was developed on the basis of a mass balance approach that incorporates convection, permeability, and reaction. The macroscopic mass balance analysis (MMBA) was extended to include chemical and enzymatic degradation. A microscopic mass balance analysis, a numerical approach, was also developed and its results compared to the MMBA. The mass balance equations for the fraction of a drug absorbed and reacted in the tube were derived from the general steady-state mass balance in a tube: [formula: see text] where M is mass, z is the length of the tube, R is the tube radius, Pw is the intestinal wall permeability, kr is the reaction rate constant, C is the concentration of drug in the volume element over which the mass balance is taken, VL is the volume of the tube, and vz is the axial velocity of the drug. The theory was first applied to the oral absorption of two tripeptide analogues, cefaclor (CCL) and cefatrizine (CZN), which degrade and dimerize in the intestine. Simulations using the mass balance equations, the experimental absorption parameters, and literature stability rate constants yielded a mean estimated extent of CCL (250-mg dose) and CZN (1000-mg dose) absorption of 89 and 51%, respectively, similar to the mean extents of absorption reported in humans (90 and 50%). It was proposed previously that 15% of the CCL dose spontaneously degraded systemically; however, our simulations suggest that significant CCL degradation (8 to 17%) occurs presystemically in the intestinal lumen. (ABSTRACT TRUNCATED AT 250 WORDS)
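
    The competition described by the mass balance can be sketched in a simple plug-flow reading: wall absorption (rate constant 2*Pw/R) competes with first-order luminal degradation (rate constant kr) over the residence time L/vz. This is a simplified sketch of the general approach, not the paper's full MMBA, and all parameter values are hypothetical.

```python
import math

def absorbed_and_reacted(Pw, R, kr, L, vz):
    """Fractions absorbed through the wall and degraded in the lumen for a
    plug-flow tube. All arguments are hypothetical illustration values."""
    ka = 2.0 * Pw / R              # wall absorption rate constant, 1/s
    t_res = L / vz                 # mean residence time in the tube, s
    k_tot = ka + kr
    f_lost = 1.0 - math.exp(-k_tot * t_res)   # fraction leaving the lumen
    return (ka / k_tot) * f_lost, (kr / k_tot) * f_lost

f_abs, f_react = absorbed_and_reacted(Pw=5e-5, R=1.0, kr=2e-5, L=300.0, vz=0.05)
print(round(f_abs, 2), round(f_react, 2))   # ~0.43 absorbed, ~0.09 degraded
```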

  17. THE DETECTION RATE OF EARLY UV EMISSION FROM SUPERNOVAE: A DEDICATED GALEX/PTF SURVEY AND CALIBRATED THEORETICAL ESTIMATES

    SciTech Connect

    Ganot, Noam; Gal-Yam, Avishay; Ofek, Eran O.; Sagiv, Ilan; Waxman, Eli; Lapid, Ofer; Kulkarni, Shrinivas R.; Kasliwal, Mansi M.; Ben-Ami, Sagi; Chelouche, Doron; Rafter, Stephen; Behar, Ehud; Laor, Ari; Poznanski, Dovi; Nakar, Ehud; Maoz, Dan; Trakhtenbrot, Benny; Neill, James D.; Barlow, Thomas A.; Martin, Christofer D.; Collaboration: ULTRASAT Science Team; WTTH consortium; GALEX Science Team; Palomar Transient Factory; and others

    2016-03-20

    The radius and surface composition of an exploding massive star, as well as the explosion energy per unit mass, can be measured using early UV observations of core-collapse supernovae (SNe). We present the first results from a simultaneous GALEX/PTF search for early ultraviolet (UV) emission from SNe. Six SNe II and one Type II superluminous SN (SLSN-II) are clearly detected in the GALEX near-UV (NUV) data. We compare our detection rate with theoretical estimates based on early, shock-cooling UV light curves calculated from models that fit existing Swift and GALEX observations well, combined with volumetric SN rates. We find that our observations are in good agreement with calculated rates assuming that red supergiants (RSGs) explode with fiducial radii of 500 R⊙, explosion energies of 10^51 erg, and ejecta masses of 10 M⊙. Exploding blue supergiants and Wolf–Rayet stars are poorly constrained. We describe how such observations can be used to derive the progenitor radius, surface composition, and explosion energy per unit mass of such SN events, and we demonstrate why UV observations are critical for such measurements. We use the fiducial RSG parameters to estimate the detection rate of SNe during the shock-cooling phase (<1 day after explosion) for several ground-based surveys (PTF, ZTF, and LSST). We show that the proposed wide-field UV explorer ULTRASAT mission is expected to find >85 SNe per year (∼0.5 SN per deg²), independent of host galaxy extinction, down to an NUV detection limit of 21.5 mag AB. Our pilot GALEX/PTF project thus convincingly demonstrates that a dedicated, systematic SN survey at the NUV band is a compelling method to study how massive stars end their life.

  18. The Detection Rate of Early UV Emission from Supernovae: A Dedicated Galex/PTF Survey and Calibrated Theoretical Estimates

    NASA Astrophysics Data System (ADS)

    Ganot, Noam; Gal-Yam, Avishay; Ofek, Eran. O.; Sagiv, Ilan; Waxman, Eli; Lapid, Ofer; Kulkarni, Shrinivas R.; Ben-Ami, Sagi; Kasliwal, Mansi M.; The ULTRASAT Science Team; Chelouche, Doron; Rafter, Stephen; Behar, Ehud; Laor, Ari; Poznanski, Dovi; Nakar, Ehud; Maoz, Dan; Trakhtenbrot, Benny; WTTH Consortium, The; Neill, James D.; Barlow, Thomas A.; Martin, Christofer D.; Gezari, Suvi; the GALEX Science Team; Arcavi, Iair; Bloom, Joshua S.; Nugent, Peter E.; Sullivan, Mark; Palomar Transient Factory, The

    2016-03-01

    The radius and surface composition of an exploding massive star, as well as the explosion energy per unit mass, can be measured using early UV observations of core-collapse supernovae (SNe). We present the first results from a simultaneous GALEX/PTF search for early ultraviolet (UV) emission from SNe. Six SNe II and one Type II superluminous SN (SLSN-II) are clearly detected in the GALEX near-UV (NUV) data. We compare our detection rate with theoretical estimates based on early, shock-cooling UV light curves calculated from models that fit existing Swift and GALEX observations well, combined with volumetric SN rates. We find that our observations are in good agreement with calculated rates assuming that red supergiants (RSGs) explode with fiducial radii of 500 R⊙, explosion energies of 10^51 erg, and ejecta masses of 10 M⊙. Exploding blue supergiants and Wolf-Rayet stars are poorly constrained. We describe how such observations can be used to derive the progenitor radius, surface composition, and explosion energy per unit mass of such SN events, and we demonstrate why UV observations are critical for such measurements. We use the fiducial RSG parameters to estimate the detection rate of SNe during the shock-cooling phase (<1 day after explosion) for several ground-based surveys (PTF, ZTF, and LSST). We show that the proposed wide-field UV explorer ULTRASAT mission is expected to find >85 SNe per year (∼0.5 SN per deg²), independent of host galaxy extinction, down to an NUV detection limit of 21.5 mag AB. Our pilot GALEX/PTF project thus convincingly demonstrates that a dedicated, systematic SN survey at the NUV band is a compelling method to study how massive stars end their life.

  19. [Bioimpedance means of skin condition monitoring during therapeutic and cosmetic procedures].

    PubMed

    Alekseenko, V A; Kus'min, A A; Filist, S A

    2008-01-01

    Engineering and technological problems of bioimpedance skin surface mapping are considered. A typical design of a device based on a PIC 16F microcontroller is suggested. It includes a keyboard, LCD indicator, probing current generator with programmed frequency tuning, and units for probing current monitoring and bioimpedance measurement. The electrode matrix of the device is constructed using nanotechnology. A microcontroller-controlled multiplexor provides scanning of interelectrode impedance, which makes it possible to obtain the impedance image of the skin surface under the electrode matrix. The microcontroller controls the probing signal generator frequency and allows layer-by-layer images of skin under the electrode matrix to be obtained. This makes it possible to use reconstruction tomography methods for analysis and monitoring of the skin condition during therapeutic and cosmetic procedures.

  20. Time dependence of electrical bioimpedance on porcine liver and kidney under a 50 Hz ac current

    NASA Astrophysics Data System (ADS)

    Spottorno, J.; Multigner, M.; Rivero, G.; Álvarez, L.; de la Venta, J.; Santos, M.

    2008-03-01

    The purpose of this work is to study the changes of the bioimpedance from its 'in vivo' value to the values measured in a few hours after the excision from the body. The evolution of electrical impedance with time after surgical extraction has been studied on two porcine organs: the liver and the kidney. Both in vivo and ex vivo measurements of electrical impedance, measuring its real and imaginary components, have been performed. The in vivo measurements have been carried out with the animal anaesthetized. The ex vivo measurements have been made more than 2 h after the extraction of the organ. The latter experiment has been carried out at two different stabilized temperatures: at normal body temperature and at the standard preservation temperature for transplant surgery. The measurements show a correlation between the biological evolution and the electrical bioimpedance of the organs, which increases from its in vivo value immediately after excision, multiplying its value by 2 in a few hours.

  1. Time dependence of electrical bioimpedance on porcine liver and kidney under a 50 Hz ac current.

    PubMed

    Spottorno, J; Multigner, M; Rivero, G; Alvarez, L; de la Venta, J; Santos, M

    2008-03-21

    The purpose of this work is to study the changes of the bioimpedance from its 'in vivo' value to the values measured in a few hours after the excision from the body. The evolution of electrical impedance with time after surgical extraction has been studied on two porcine organs: the liver and the kidney. Both in vivo and ex vivo measurements of electrical impedance, measuring its real and imaginary components, have been performed. The in vivo measurements have been carried out with the animal anaesthetized. The ex vivo measurements have been made more than 2 h after the extraction of the organ. The latter experiment has been carried out at two different stabilized temperatures: at normal body temperature and at the standard preservation temperature for transplant surgery. The measurements show a correlation between the biological evolution and the electrical bioimpedance of the organs, which increases from its in vivo value immediately after excision, multiplying its value by 2 in a few hours.

  2. Studying the Performance of Conductive Polymer Films as Textile Electrodes for Electrical Bioimpedance Measurements

    NASA Astrophysics Data System (ADS)

    Cunico, F. J.; Marquez, J. C.; Hilke, H.; Skrifvars, M.; Seoane, F.

    2013-04-01

    With the goal of finding novel biocompatible materials suitable to replace silver in the manufacture of textile electrodes for medical applications of electrical bioimpedance spectroscopy, three different polymeric materials have been investigated. Films were prepared from the different polymeric materials and custom bracelets were fabricated from them. Tetrapolar total right-side electrical bioimpedance spectroscopy (EBIS) measurements were performed with polymer and with standard gel electrodes, and the performance of the polymer films was compared against that of the gel electrodes. The results indicated that only the polypropylene 1380 could produce EBIS measurements, but these were markedly tainted with high-frequency artefacts. The influence of electrode mismatch, stray capacitances and large electrode polarization impedance is unclear and needs to be clarified in further studies. If sensorized garments could be made with such biocompatible polymeric materials, the burden of classifying textrodes as class III devices could be avoided.

  3. Bioimpedence to Assess Breast Density as a Risk Factor for Breast Cancer in Adult Women and Adolescent Girls.

    PubMed

    Maskarinec, Gertraud; Morimoto, Yukiko; Laguana, Michelle B; Novotny, Rachel; Leon Guerrero, Rachael T

    2016-01-01

    Although high mammographic density is one of the strongest predictors of breast cancer risk, X-ray based mammography cannot be performed before the recommended screening age, especially not in adolescents and young women. Therefore, new techniques for breast density measurement are of interest. In this pilot study in Guam and Hawaii, we evaluated a radiation-free, bioimpedance device called Electrical Breast Densitometer™ (EBD; senoSENSE Medical Systems, Inc., Ontario, Canada) for measuring breast density in 95 women aged 31-82 years and 41 girls aged 8-18 years. Percent density (PD) was estimated in the women's most recent mammogram using a computer-assisted method. Correlation coefficients and linear regression were applied for statistical analysis. In adult women, mean EBD and PD values of the left and right breasts were 230±52 and 226±50 Ω and 23.7±15.1 and 24.2±15.2%, respectively. The EBD measurements were inversely correlated with PD (rSpearman=-0.52, p<0.0001); the correlation was stronger in Caucasians (rSpearman=-0.70, p<0.0001) than Asians (rSpearman=-0.54, p<0.01) and Native Hawaiian/Chamorro/Pacific Islanders (rSpearman=-0.34, p=0.06). Using 4 categories of PD (<10, 10-25, 26-50, 51-75%), the respective mean EBD values were 256±32, 249±41, 202±46, and 178±43 Ω (p<0.0001). In girls, the mean EBD values in the left and right breast were 148±40 and 155±54 Ω; EBD values decreased from Tanner stages 1 to 4 (204±14, 154±79, 136±43, and 119±16 Ω for stages 1-4, respectively) but were higher at Tanner stage 5 (165±30 Ω). With further development, this bioimpedance method may allow for investigations of breast development among adolescents, as well as assessment of breast cancer risk early in life and in populations without access to mammography.
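
    The reported inverse EBD-PD relation can be illustrated with simulated data: higher percent density gives lower impedance, and a rank correlation recovers the association. The numbers below are synthetic, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
pd_percent = rng.uniform(5.0, 75.0, size=95)                    # percent density
ebd_ohm = 260.0 - 1.2 * pd_percent + rng.normal(0.0, 15.0, 95)  # impedance falls as PD rises

rho, p = spearmanr(ebd_ohm, pd_percent)
print(rho < 0, p < 0.001)   # a strong, significant inverse rank correlation
```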

  4. Bioimpedence to Assess Breast Density as a Risk Factor for Breast Cancer in Adult Women and Adolescent Girls

    PubMed Central

    Maskarinec, Gertraud; Morimoto, Yukiko; Laguaña, Michelle B; Novotny, Rachel; Guerrero, Rachael T Leon

    2016-01-01

    Although high mammographic density is one of the strongest predictors of breast cancer risk, X-ray based mammography cannot be performed before the recommended screening age, especially not in adolescents and young women. Therefore, new techniques for breast density measurement are of interest. In this pilot study in Guam and Hawaii, we evaluated a radiation-free, bioimpedance device called Electrical Breast Densitometer™ (EBD; senoSENSE Medical Systems, Inc., Ontario, Canada) for measuring breast density in 95 women aged 31–82 years and 41 girls aged 8–18 years. Percent density (PD) was estimated in the women’s most recent mammogram using a computer-assisted method. Correlation coefficients and linear regression were applied for statistical analysis. In adult women, mean EBD and PD values of the left and right breasts were 230±52 and 226±50 Ω and 23.7±15.1 and 24.2±15.2%, respectively. The EBD measurements were inversely correlated with PD (rSpearman=−0.52, p<0.0001); the correlation was stronger in Caucasians (rSpearman=−0.70, p<0.0001) than Asians (rSpearman=−0.54, p<0.01) and Native Hawaiian/Chamorro/Pacific Islanders (rSpearman=−0.34, p=0.06). Using 4 categories of PD (<10, 10–25, 26–50, 51–75%), the respective mean EBD values were 256±32, 249±41, 202±46, and 178±43 Ω (p<0.0001). In girls, the mean EBD values in the left and right breast were 148±40 and 155±54 Ω; EBD values decreased from Tanner stages 1 to 4 (204±14, 154±79, 136±43, and 119±16 Ω for stages 1–4, respectively) but were higher at Tanner stage 5 (165±30 Ω). With further development, this bioimpedance method may allow for investigations of breast development among adolescents, as well as assessment of breast cancer risk early in life and in populations without access to mammography. PMID:26838256

  5. Landfill mining: Development of a theoretical method for a preliminary estimate of the raw material potential of landfill sites.

    PubMed

    Wolfsberger, Tanja; Nispel, Jörg; Sarc, Renato; Aldrian, Alexia; Hermann, Robert; Höllen, Daniel; Pomberger, Roland; Budischowsky, Andreas; Ragossnig, Arne

    2015-07-01

    In recent years, the rising demand for raw materials from emerging economies (e.g. China) has led to a change in the availability of certain primary raw materials, such as ores or coal. The accompanying rising demand for secondary raw materials as possible substitutes for primary resources, soaring prices and the global scarcity of specific (e.g. metallic) raw materials have piqued the interest of researchers and industry in considering landfills as possible secondary sources of raw materials. These sites often contain substantial amounts of materials that can potentially be utilised materially or energetically. To investigate the raw material potential of a landfill, boreholes and excavations, as well as subsequent hand sorting, have proven quite successful. These procedures, however, are expensive and time consuming, as they frequently require extensive construction measures on the landfill body or waste mass. For this reason, this article introduces a newly developed, affordable, theoretical method for the estimation of landfill contents. The article summarises the individual calculation steps of the method and demonstrates them using the example of a selected Austrian sanitary landfill. To assess practicality and plausibility, the mathematically determined raw material potential is compared with the actual results from experimental studies of excavated waste from the same landfill (the actual raw material potential).

  6. Congestive heart failure patient monitoring using wearable Bio-impedance sensor technology.

    PubMed

    Seulki Lee; Squillace, Gabriel; Smeets, Christophe; Vandecasteele, Marianne; Grieten, Lars; de Francisco, Ruben; Van Hoof, Chris

    2015-08-01

    A new technique to monitor the fluid status of congestive heart failure (CHF) patients in the hospital is proposed and verified in a clinical trial with 8 patients. A wearable bio-impedance (BioZ) sensor allows a continuous localized measurement which can complement clinical tools in the hospital. Thanks to the multi-parametric approach and correlation analysis with clinical references, BioZ is shown to be a promising parameter for continuous, wearable CHF patient monitoring applications.

  7. A new model for the determination of fluid status and body composition from bioimpedance measurements.

    PubMed

    Kraemer, M

    2006-09-01

    In patients with end stage renal failure, control of the fluid status of the body is lost and fluid accumulates continuously. By dialysis therapy, excess fluid can be removed, but there are no reliable methods to establish the amount of excess fluid to be removed. Severe and even lethal complications may be the consequence of longer term deviations from a normal fluid status in dialysis patients, but also in other patient groups. Therefore, a large medical need exists for a precise and pragmatic method to determine fluid status. Bioimpedance measurement, today mainly used for nutrition status assessment, is regarded as an interesting candidate method for fluid status determination. This paper presents a four-compartment model of the human body, developed to derive information on fluid status from extra- and intracellular volumes measured by bioimpedance spectroscopy. The model allows us to determine weights of each of four compartments (overhydration, fat, muscle and remaining 'basic' components) by analyzing extra- and intracellular water volumes in different tissues of the body. Thereby fluid status (overhydration volume, normohydrated weight of the patient) as well as nutrition and fitness status (lean body, fat and muscle mass) can be determined quantitatively from a single measurement. A preliminary evaluation of the performance of a system consisting of a bioimpedance spectrum analyzer and the four-compartment model is also provided.
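
    The core idea behind fluid-status models of this kind can be sketched very simply: excess fluid accumulates almost entirely in the extracellular space, so overhydration can be read as the measured extracellular water (ECW) in excess of what a normally hydrated body with the same intracellular water (ICW) would carry. The ratio below is a hypothetical constant for illustration, not the paper's calibrated four-compartment tissue model.

```python
# Assumed ECW/ICW ratio at normohydration (hypothetical illustration value).
ECW_ICW_NORMO = 0.76

def overhydration_litres(ecw_l, icw_l):
    """Excess extracellular fluid relative to a normally hydrated reference."""
    return ecw_l - ECW_ICW_NORMO * icw_l

# Example: ECW = 18 L, ICW = 21 L  ->  roughly 2 L of excess fluid
print(round(overhydration_litres(18.0, 21.0), 2))
```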

  8. Performance evaluation of wideband bio-impedance spectroscopy using constant voltage source and constant current source

    NASA Astrophysics Data System (ADS)

    Mohamadou, Youssoufa; In Oh, Tong; Wi, Hun; Sohal, Harsh; Farooq, Adnan; Woo, Eung Je; McEwan, Alistair Lee

    2012-10-01

    Current sources are widely used in bio-impedance spectroscopy (BIS) measurement systems to maximize current injection for increased signal-to-noise ratio while keeping within medical safety specifications. High-performance current sources based on the Howland current pump with optimized impedance converters are able to minimize stray capacitance of the cables and setup. This approach is limited at high frequencies, primarily due to the deteriorated output impedance of the constant current source when situated in a real measurement system. For this reason, voltage sources have been suggested, but they require a current-sensing resistor, and SNR is reduced at low-impedance loads due to the lower current required to maintain constant voltage. In this paper, we compare the performance of a current source-based BIS and a voltage source-based BIS, which use common components. The current source BIS is based on a Howland current pump and generalized impedance converters to maintain a high output impedance of more than 1 MΩ at 2 MHz. The voltage source BIS is based on voltage division between an internal current-sensing resistor (Rs) and an external sample. To maintain high SNR, Rs is varied so that the source voltage is divided more or less equally. In order to calibrate the systems, we measured the transfer function of the BIS systems with several known resistor and capacitor loads. From this we may estimate the resistance and capacitance of biological tissues using the least-squares method to minimize error between the measured transimpedance, excluding the system transfer function, and that from an impedance model. When tested on realistic loads including discrete resistors and capacitors, and saline and agar phantoms, the voltage source-based BIS system had a wider bandwidth of 10 Hz to 2.2 MHz with less than 1% deviation from the expected spectra, compared to more than 10% with the current source. The voltage source also showed an SNR of at least 60 dB up to 2.2 MHz.

  9. Volume estimation of low-contrast lesions with CT: a comparison of performances from a phantom study, simulations and theoretical analysis

    NASA Astrophysics Data System (ADS)

    Li, Qin; Gavrielides, Marios A.; Zeng, Rongping; Myers, Kyle J.; Sahiner, Berkman; Petrick, Nicholas

    2015-01-01

    Measurements of lung nodule volume with multi-detector computed tomography (MDCT) have been shown to be more accurate and precise compared to conventional lower-dimensional measurements. Quantifying the size of lesions is potentially more difficult when the object-to-background contrast is low, as with lesions in the liver. Physical phantom and simulation studies are often utilized to analyze the bias and variance of lesion size estimates because a ground truth or reference standard can be established. In addition, it may also be useful to derive theoretical bounds as another way of characterizing lesion sizing methods. The goal of this work was to study the performance of a MDCT system for a lesion volume estimation task with object-to-background contrast less than 50 HU, and to understand the relation among the performances obtained from the phantom study, simulation and theoretical analysis. We performed both phantom and simulation studies, and analyzed the bias and variance of volume measurements produced by a matched-filter-based estimator. We further corroborated the results with a theoretical analysis to estimate the achievable performance bound, namely the Cramér-Rao lower bound (CRLB) on the minimum variance of the size estimates. Results showed that estimates of non-attached solid small lesion volumes with object-to-background contrast of 31-46 HU can be accurate and precise, with less than 10.8% percent bias and less than 4.8% standard deviation of percent error (SPE), in standard dose scans. These results are consistent with the theoretical (CRLB), computational (simulation) and empirical phantom bounds. The difference between the bounds is rather small (less than 1.9% for SPE), indicating that the theoretical and simulation-based performance bounds can be good surrogates for physical phantom studies.
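    The two figures of merit quoted above, percent bias and the standard deviation of percent error (SPE), can be computed from repeated volume estimates as follows (function and variable names are ours, and the numbers in the example are made up):

```python
import numpy as np

def percent_bias_and_spe(estimates, true_volume):
    """Percent error of each estimate, then its mean (percent bias) and
    sample standard deviation (SPE)."""
    pe = 100.0 * (np.asarray(estimates, dtype=float) - true_volume) / true_volume
    return pe.mean(), pe.std(ddof=1)

# Made-up repeated estimates of a 1000 mm^3 lesion:
bias, spe = percent_bias_and_spe([950.0, 1050.0, 1000.0], 1000.0)
```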

  10. Comparison of Bioimpedance and Dual-Energy X-Ray Absorptiometry for Measurement of Fat Mass in Hemodialysis Patients

    PubMed Central

    Molfino, Alessio; Don, Burl R.; Kaysen, George A.

    2014-01-01

    Background Fat mass (FM) is measured with dual-energy X-ray absorptiometry (DXA), but DXA is expensive and not portable. Multifrequency bioimpedance spectroscopy (BIS) measures total body water (TBW) and intracellular and extracellular water (ICW, ECW). FM is calculated by subtracting fat-free mass (FFM) from weight, assuming a fractional hydration of FFM of 0.73. Hemodialysis patients (HD), however, have non-physiologic expansion of ECW. Our aim was to apply a model to estimate FM in HD patients and controls. Methods We estimated the hydration of FFM in healthy subjects (C) and HD patients with BIS (Impedimed multifrequency) assuming a hydration of 0.73, or using a formula allowing ECW and ICW to vary, deriving a value for FM that accounts for variances in ECW and ICW. FM was measured by DXA (Hologic Discovery W) in 25 C and in 11 HD. We measured TBW, ECW and ICW with BIS and calculated FM using either Weight - TBW/0.73 or a formula accounting for variations in ECW/ICW. Results ECW/ICW was greater in HD than in C (0.83 ± 0.08 vs 0.76 ± 0.04; p = 0.001). FM (kg) measured by DXA, or estimated from TBW using constant hydration or accounting for variations in ECW/ICW, was not significantly different in C or in HD. Values obtained by all methods correlated (p < 0.001) and none of the Bland-Altman plots regressed (r2 = 0.00). FM measured by DXA and by BIS in both C and HD combined correlated (r2 = 0.871). Conclusion Expansion of ECW in HD is statistically significant; however, the effect on hydration of FFM is insufficient to cause significant deviation from values derived using a hydration value of 0.73 within the range of expansion of ECW in the HD population studied here. PMID:23689544
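    The constant-hydration estimate described above, FM = Weight - TBW/0.73, can be sketched directly (the example numbers are hypothetical):

```python
HYDRATION_FFM = 0.73  # assumed fractional hydration of fat-free mass

def fat_mass_kg(weight_kg, tbw_l):
    """FM = weight - FFM, with FFM = TBW / 0.73 (1 L of water ~ 1 kg)."""
    return weight_kg - tbw_l / HYDRATION_FFM

fm = fat_mass_kg(70.0, 40.0)  # ~15.2 kg for a 70 kg subject with 40 L TBW
```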

  11. Theoretical estimation of equilibrium sulfur isotope fractionations among aqueous sulfite species: Implications for isotope models of microbial sulfate reduction

    NASA Astrophysics Data System (ADS)

    Eldridge, D. L.; Farquhar, J.; Guo, W.

    2015-12-01

    Sulfite (sensu lato), an intermediate in a variety of sulfur redox processes, plays a particularly important role in microbial sulfate reduction. It exists intracellularly as multiple species between sets of enzymatic reactions that transform sulfate to sulfide, with the exact speciation depending on pH, temperature, and ionic strength. However, the complex speciation of sulfite is ignored in current isotope partitioning models of microbial sulfate reduction and simplified solely to the pyramidal SO32- (sulfite sensu stricto), due to a lack of appropriate constraints. We theoretically estimated the equilibrium sulfur isotope fractionations (33S/32S, 34S/32S, 36S/32S) among all documented sulfite species in aqueous solution, including sulfite (SO32-), bisulfite isomers and dimers ((HS)O3-, (HO)SO2-, S2O52-), and SO2(aq), through first-principles quantum mechanical calculations. The calculations were performed at the B3LYP/6-31+G(d,p) level using cluster models with 30-40 water molecules surrounding the solute. Our calculated equilibrium fractionation factors compare well with the available experimental constraints and suggest that the minor and often-ignored tetrahedral (HS)O3- isomer of bisulfite strongly influences isotope partitioning behavior in the sulfite system under most environmentally relevant conditions, particularly fractionation magnitudes and their unusual temperature dependence. For example, we predict that sulfur isotope fractionation between sulfite and bulk bisulfite in solution should have an apparent inverse temperature dependence due to the influence of (HS)O3- and its increased stability at higher temperatures. Our findings highlight the need to appropriately account for speciation/isomerization of sulfur species in sulfur isotope studies. We will also present similar calculation results for other aqueous sulfur compounds (e.g., H2S/HS-, SO42-, S2O32-, S3O62-, and the poorly documented SO22- species), and discuss the implications of our results for microbial sulfate reduction.

  12. Evaluation of bioimpedance for the measurement of physiologic variables as related to hemodynamic studies in space flight

    NASA Technical Reports Server (NTRS)

    Taylor, Bruce C.

    1993-01-01

    Orthostatic intolerance, following space flight, has received substantial attention because of the possibility that it compromises astronaut safety and reduces the ability of astronauts to function at peak performance levels upon return to a one-g environment. Many pre- and post-flight studies are performed to evaluate changes in hemodynamic responses to orthostatic challenges after shuttle missions. The purpose of this present project is to validate bioimpedance as a means to acquire stroke volume and other hemodynamic information in these studies. In this study, ten male and ten female subjects were subjected to simultaneous measurements of thoracic bioimpedance and Doppler ultrasonic velocimetry under supine, 10 degree head down and 30 degree head up conditions. Paired measurements were made during six periods of five seconds breath holding, over a two minute period, for each of the three positions. Stroke volume was calculated by three bioimpedance techniques and ultrasonic Doppler.

  13. Reduction of anisotropy influence and contacting effects in in-vitro bioimpedance measurements

    NASA Astrophysics Data System (ADS)

    Guermazi, M.; Kanoun, O.; Derbel, N.

    2013-04-01

    The experimental procedure is decisive for obtaining reproducible in-vitro bioimpedance measurements. An electrode configuration is proposed that avoids several disadvantages of needle electrodes and circular non-penetrating electrodes. The proposed electrode geometry reduces the influence of anisotropy while simultaneously ensuring good probe contact. We also propose an experimental method to prevent the appearance of bacteria and to reduce water loss in meat during post-mortem experiments. The results show that the electrode configuration, combined with the developed experimental method, ensured reproducible measurements over a long period of 14 days post-mortem.

  14. Prediction of limb lean tissue mass from bioimpedance spectroscopy in persons with chronic spinal cord injury

    PubMed Central

    Cirnigliaro, Christopher M.; La Fountaine, Michael F.; Emmons, Racine; Kirshblum, Steven C.; Asselin, Pierre; Spungen, Ann M.; Bauman, William A.

    2013-01-01

    Background Bioimpedance spectroscopy (BIS) is a non-invasive, simple, and inexpensive modality that uses 256 frequencies to determine the extracellular volume impedance (ECVRe) and intracellular volume impedance (ICVRi) in the total body and regional compartments. As such, it may have utility as a surrogate measure to assess lean tissue mass (LTM). Objective To compare the relationship between LTM from dual-energy X-ray absorptiometry (DXA) and BIS impedance values in spinal cord injury (SCI) and able-bodied (AB) control subjects using a cross-sectional research design. Methods In 60 subjects (30 AB and 30 SCI), a total body DXA scan was used to obtain total body and leg LTM. BIS was performed to measure the impedance quotient of the ECVRe and ICVRi in the total body and limbs. Results BIS-derived ECVRe yielded a model for LTM in paraplegia, tetraplegia, and control for the right leg (RL) (R2 = 0.75, standard errors of estimation (SEE) = 1.02 kg, P < 0.0001; R2 = 0.65, SEE = 0.91 kg, P = 0.0006; and R2 = 0.54, SEE = 1.31 kg, P < 0.0001, respectively) and left leg (LL) (R2 = 0.76, SEE = 1.06 kg, P < 0.0001; R2 = 0.64, SEE = 0.83 kg, P = 0.0006; and R2 = 0.54, SEE = 1.34 kg, P < 0.0001, respectively). The ICVRi was similarly predictive of LTM in paraplegia, tetraplegia, and AB controls for the RL (R2 = 0.85, SEE = 1.31 kg, P < 0.0001; R2 = 0.52, SEE = 0.95 kg, P = 0.003; and R2 = 0.398, SEE = 1.46 kg, P = 0.0003, respectively) and LL (R2 = 0.62, SEE = 1.32 kg, P = 0.0003; R2 = 0.57, SEE = 0.91 kg, P = 0.002; and R2 = 0.42, SEE = 1.31 kg, P = 0.0001, respectively). Conclusion Findings demonstrate that the BIS-derived impedance quotients for ECVRe and ICVRi may be used as surrogate markers to track changes in leg LTM in persons with SCI. PMID:23941792

  15. Non-contact multi-frequency magnetic induction spectroscopy system for industrial-scale bio-impedance measurement

    NASA Astrophysics Data System (ADS)

    O'Toole, M. D.; Marsh, L. A.; Davidson, J. L.; Tan, Y. M.; Armitage, D. W.; Peyton, A. J.

    2015-03-01

    Biological tissues have a complex impedance, or bio-impedance, profile which changes with respect to frequency. This is caused by dispersion mechanisms which govern how the electromagnetic field interacts with the tissue at the cellular and molecular level. Measuring the bio-impedance spectra of a biological sample can potentially provide insight into the sample’s properties and its cellular structure. This has obvious applications in the medical, pharmaceutical and food-based industrial domains. However, measuring the bio-impedance spectra non-destructively and in a way which is practical at an industrial scale presents substantial challenges. The low conductivity of the sample requires a highly sensitive instrument, while the demands of industrial-scale operation require a fast high-throughput sensor of rugged design. In this paper, we describe a multi-frequency magnetic induction spectroscopy (MIS) system suitable for industrial-scale, non-contact, spectroscopic bio-impedance measurement over a bandwidth of 156 kHz-2.5 MHz. The system sensitivity and performance are investigated using calibration and known reference samples. It is shown to yield rapid and consistently sensitive results with good long-term stability. The system is then used to obtain conductivity spectra of a number of biological test samples, including yeast suspensions of varying concentration and a range of agricultural produce, such as apples, pears, nectarines, kiwis, potatoes, oranges and tomatoes.

  16. Bioimpedance Harmonic Analysis as a Diagnostic Tool to Assess Regional Circulation and Neural Activity

    NASA Astrophysics Data System (ADS)

    Mudraya, I. S.; Revenko, S. V.; Khodyreva, L. A.; Markosyan, T. G.; Dudareva, A. A.; Ibragimov, A. R.; Romich, V. V.; Kirpatovsky, V. I.

    2013-04-01

    A novel technique based on harmonic analysis of bioimpedance microvariations, using an original hardware and software complex incorporating a high-resolution impedance converter, was used to assess neural activity and circulation in the human urinary bladder and penis in patients with pelvic pain, erectile dysfunction, and overactive bladder. The therapeutic effects of shock wave therapy and botulinum toxin detrusor injections were evaluated quantitatively according to the spectral peaks at the low 0.1 Hz frequency (M, for Mayer wave) and at the respiratory (R) and cardiac (C) rhythms with their harmonics. Enhanced baseline regional neural activity, identified according to the M and R peaks, was found to be presumably sympathetic in pelvic pain patients and parasympathetic in patients with overactive bladder. Total pulsatile activity and pulsatile resonances found in the bladder as well as in the penile spectrum characterised regional circulation and vascular tone. The abnormal spectral parameters characteristic of patients with genitourinary diseases shifted towards the norm in cases of efficient therapy. Bioimpedance harmonic analysis appears to be a potent tool to assess regional peculiarities of circulatory and autonomic nervous activity in the course of patient treatment.

  17. Gastric Tissue Damage Analysis Generated by Ischemia: Bioimpedance, Confocal Endomicroscopy, and Light Microscopy

    PubMed Central

    Beltran, Nohra E.; Garcia, Laura E.; Garcia-Lorenzana, Mario

    2013-01-01

    Ischemic damage to the gastric mucosa plays an important role in the outcome of critical care patients, because it is the first tissue damaged by compensatory mechanisms during shock. The aim of the study is to relate bioimpedance changes to the level of tissue damage generated by ischemia, by means of confocal endomicroscopy and light microscopy. Bioimpedance of the gastric mucosa and confocal images were obtained from male Wistar rats during basal and ischemia conditions. The rats were anesthetized, and stain was applied (fluorescein and/or acriflavine). The impedance spectroscopy catheter was inserted, followed by the confocal endomicroscopy probe. After basal measurements and biopsy, ischemia was induced by clamping the hepatic and gastric arteries. Finally, pyloric antrum tissue was preserved in buffered formaldehyde (10%) for histology processing using light microscopy. Confocal images were equalized and binarized, boundaries were defined, and infiltrations were quantified. Impedance and infiltrations increased with ischemia, showing significant changes between basal and ischemia conditions (P < 0.01). Light microscopy analysis allows detection of general alterations in cellular and tissue integrity, confirming the increments in gastric reactance and confocal image quantification obtained during ischemia. PMID:23841094

  18. Extraction of Cole parameters from the electrical bioimpedance spectrum using stochastic optimization algorithms.

    PubMed

    Gholami-Boroujeny, Shiva; Bolic, Miodrag

    2016-04-01

    Fitting measured bioimpedance spectroscopy (BIS) data to the Cole model and then extracting the Cole parameters is a common practice in BIS applications. The extracted Cole parameters can then be analysed as descriptors of tissue electrical properties. For a better evaluation of the physiological or pathological properties of biological tissue, accurate extraction of the Cole parameters is of great importance. This paper proposes an improved Cole parameter extraction based on the bacterial foraging optimization (BFO) algorithm. We employed simulated datasets to test the performance of the BFO fitting method regarding parameter extraction accuracy and noise sensitivity, and we compared the results with those of a least squares (LS) fitting method. The BFO method showed better robustness to noise and higher accuracy in terms of the extracted parameters. In addition, we applied our method to experimental data where bioimpedance measurements were obtained from the forearm in three different positions of the arm. The goal of the experiment was to explore how robust the Cole parameters are in classifying the position of the arm for different people, measured at different times. The Cole parameters extracted by the LS and BFO methods were applied to different classifiers. Two other evolutionary algorithms, the genetic algorithm (GA) and particle swarm optimization (PSO), were also used for comparison purposes. We showed that when the classifiers are fed with the feature sets extracted by the BFO fitting method, higher accuracy is obtained on both training data and test data.
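    The Cole model referred to above is commonly written Z(w) = Rinf + (R0 - Rinf)/(1 + (jw*tau)^alpha). Below is a minimal sketch of the least-squares (LS) baseline the paper compares against, on noise-free simulated data; the frequency range, true parameter values and initial-guess heuristic are our assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import least_squares

def cole_impedance(f, r0, rinf, tau, alpha):
    """Cole model: Z = Rinf + (R0 - Rinf) / (1 + (j*2*pi*f*tau)**alpha)."""
    jw = 1j * 2 * np.pi * f
    return rinf + (r0 - rinf) / (1 + (jw * tau) ** alpha)

def fit_cole(f, z):
    """LS fit of (R0, Rinf, tau, alpha) on stacked real/imaginary residuals."""
    def residuals(p):
        d = cole_impedance(f, *p) - z
        return np.concatenate([d.real, d.imag])
    x0 = [np.abs(z).max(), np.abs(z).min(), 1e-6, 0.8]  # crude initial guess
    return least_squares(residuals, x0,
                         bounds=([0, 0, 1e-9, 0.1], [1e4, 1e4, 1e-3, 1.0])).x

f = np.logspace(3, 6, 50)                         # 1 kHz .. 1 MHz
z = cole_impedance(f, 500.0, 100.0, 2e-6, 0.85)   # simulated spectrum
r0, rinf, tau, alpha = fit_cole(f, z)             # recovers the true values
```

    A stochastic optimizer such as BFO would replace `least_squares` here while keeping the same residual function.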

  19. Method and device for bio-impedance measurement with hard-tissue applications.

    PubMed

    Guimerà, A; Calderón, E; Los, P; Christie, A M

    2008-06-01

    Bio-impedance measurements can be used to detect and monitor several properties of living hard tissues, including bone mineral density, bone fracture healing and dental caries detection. In this paper a simple method and hardware architecture for hard-tissue bio-impedance measurement is proposed. The key design aspects of such an architecture are discussed and a commercial handheld AC impedance device is presented that is fully certified to international medical standards. It includes a 4-channel multiplexer and is capable of measuring impedances from 10 kΩ to 10 MΩ across a frequency range of 100 Hz to 100 kHz with a maximum error of 5%. The device incorporates several user interface methods and a Bluetooth link for bi-directional wireless data transfer. Low-power design techniques have been implemented, ensuring the device exceeds 8 h of continuous use. Finally, bench test results using dummy cells consisting of parallel-connected resistors and capacitors, from 10 kΩ to 10 MΩ and from 20 pF to 100 pF, are discussed.
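    The dummy cells used in the bench tests are parallel resistor-capacitor networks; their expected impedance over the device's frequency range can be sketched as follows (component values are taken from the ranges quoted above):

```python
import numpy as np

def parallel_rc_impedance(f_hz, r_ohm, c_farad):
    """Z = R / (1 + j*2*pi*f*R*C) for a resistor and capacitor in parallel."""
    return r_ohm / (1 + 1j * 2 * np.pi * f_hz * r_ohm * c_farad)

f = np.array([100.0, 1e3, 1e4, 1e5])          # 100 Hz .. 100 kHz
z = parallel_rc_impedance(f, 10e6, 100e-12)   # worst case: 10 MOhm, 100 pF
# At 100 kHz the capacitive path dominates and |Z| collapses to ~16 kOhm,
# which is why high-impedance loads are hardest at the top of the band.
```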

  20. Longitudinal and transversal bioimpedance measurements in addition to diagnosis of heart failure

    NASA Astrophysics Data System (ADS)

    Ribas, N.; Nescolarde, L.; Domingo, M.; Gastelurrutia, P.; Bayés-Genis, A.; Rosell-Ferrer, J.

    2010-04-01

    Heart failure (HF) is a clinical syndrome characterised by signs of systemic and pulmonary fluid retention, shortness of breath and/or fatigue. There is a lack of reliable indicators of disease state. The benefits and applicability of non-invasive bioimpedance measurement of the hydration state of soft tissues have been validated, fundamentally, in dialysis patients. Four impedance configurations (2 longitudinal and 2 transversal) were analyzed in 48 HF patients (M = 28, F = 20) classified according to a clinical disease severity score (CDSS) derived from the Framingham criteria: CDSS <= 2 (G1: M = 23, F = 14) and CDSS > 2 (G2: M = 5, F = 6). The aim of this study is to analyze longitudinal and transversal bioimpedance measurements at 50 kHz, in addition to clinical diagnosis parameters of heart failure, including the clinical disease severity score (CDSS) and biomarker concentration (NT-proBNP). The Kolmogorov-Smirnov test was used to test the normality of all variables. The CDSS, NT-proBNP and impedance parameters between groups (G1 and G2) were compared by means of the Mann-Whitney U-test. Statistical significance was considered at P < 0.05. Measured whole-body impedance was analyzed using the RXc graph.

  1. Estimating associations of mobile phone use and brain tumours taking into account laterality: a comparison and theoretical evaluation of applied methods.

    PubMed

    Frederiksen, Kirsten; Deltour, Isabelle; Schüz, Joachim

    2012-12-10

    Estimating exposure-outcome associations using laterality information on both exposure and outcome is an issue when estimating associations of mobile phone use and brain tumour risk. The exposure is localized; therefore, a potential risk is expected to exist primarily on the side of the head where the phone is usually held (ipsilateral exposure), and to a lesser extent on the opposite side of the head (contralateral exposure). Several measures of the associations with ipsilateral and contralateral exposure, dealing with different sampling designs, have been presented in the literature. This paper presents a general framework for the analysis of such studies using a likelihood-based approach in a competing-risks model setting. The approach clarifies the implicit assumptions required for the validity of the presented estimators, particularly that in some approaches the risk with contralateral exposure is assumed to be zero. The performance of the estimators is illustrated in a simulation study, showing for instance that while in some scenarios there is a loss of statistical power, others, in the case of a positive ipsilateral exposure-outcome association, would result in a negatively biased estimate of the contralateral exposure parameter, irrespective of any additional recall bias. In conclusion, our theoretical evaluations and results from the simulation study emphasize the importance of setting up a formal model, which furthermore allows for estimation in more complicated and perhaps more realistic exposure settings, such as taking into account exposure to both sides of the head.

  2. Evaluating the reliability of different preprocessing steps to estimate graph theoretical measures in resting state fMRI data.

    PubMed

    Aurich, Nathassia K; Alves Filho, José O; Marques da Silva, Ana M; Franco, Alexandre R

    2015-01-01

    With resting-state functional MRI (rs-fMRI) there are a variety of post-processing methods that can be used to quantify the human brain connectome. However, there is also a choice of which preprocessing steps will be used prior to calculating the functional connectivity of the brain. In this manuscript, we have tested seven different preprocessing schemes and assessed the reliability between and reproducibility within the various strategies by means of graph theoretical measures. The different preprocessing schemes were tested on a publicly available dataset, which includes rs-fMRI data of healthy controls. The brain was parcellated into 190 nodes and four graph theoretical (GT) measures were calculated: global efficiency (GEFF), characteristic path length (CPL), average clustering coefficient (ACC), and average local efficiency (ALE). Our findings indicate that results can differ significantly based on which preprocessing steps are selected. We also found dependence between motion and GT measurements in most preprocessing strategies. We conclude that using censoring based on outliers within the functional time series as a preprocessing step increases the reliability of GT measurements and reduces their dependency on head motion.
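    Two of the four GT measures named above can be computed by breadth-first search on an unweighted graph. A small self-contained sketch (the toy adjacency list is made up; a 190-node parcellation would be handled the same way):

```python
from collections import deque

def bfs_distances(adj, source):
    """Hop counts from source to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def geff_and_cpl(adj):
    """Global efficiency (mean of 1/d) and characteristic path length (mean d),
    both over all ordered pairs of distinct, connected nodes."""
    n = len(adj)
    inv_sum = len_sum = pairs = 0
    for s in adj:
        for t, d in bfs_distances(adj, s).items():
            if t != s:
                inv_sum += 1.0 / d
                len_sum += d
                pairs += 1
    return inv_sum / (n * (n - 1)), len_sum / pairs

# Toy 4-node path graph 0-1-2-3:
geff, cpl = geff_and_cpl({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]})
```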

  3. Improved estimates of the pion-photon transition form factor in the (1 ≤ Q2 ≤ 5) GeV2 range and their theoretical uncertainties

    NASA Astrophysics Data System (ADS)

    Mikhailov, S. V.; Pimikov, A. V.; Stefanis, N. G.

    2017-03-01

    We consider the pion-photon transition form factor at low to intermediate spacelike momenta within the theoretical framework of light-cone sum rules. We derive predictions which take into account all currently known contributions stemming from QCD perturbation theory up to the next-to-next-to-leading order (NNLO) and by including all twist terms up to order six. In order to enable a more detailed comparison with forthcoming high-precision data, we also estimate the main systematic theoretical uncertainties, stemming from various sources, and discuss their influence on the calculations — in particular the dominant one related to the still uncalculated part of the NNLO contribution. The analysis addresses, in broad terms, also the role of the twist-two pion distribution amplitude derived with different approaches.

  4. Removing respiratory artefacts from transthoracic bioimpedance spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Cuba-Gyllensten, I.; Abtahi, F.; Bonomi, A. G.; Lindecrantz, K.; Seoane, F.; Amft, O.

    2013-04-01

    Transthoracic impedance spectroscopy (TIS) measurements from wearable textile electrodes provide a tool to remotely and non-invasively monitor patient health. However, breathing and cardiac processes inevitably affect TIS measurements, since they are sensitive to changes in geometry and air or fluid volumes in the thorax. This study aimed at investigating the effect of respiration on Cole parameters extracted from TIS measurements and at developing a method to suppress artefacts. TIS data were collected from 10 participants at 16 frequencies (range: 10 kHz - 1 MHz) using a textile electrode system (Philips Technologie GmbH). Simultaneously, breathing volumes and frequency were logged using an electronic spirometer augmented with data from a breathing belt. The effect of respiration on TIS measurements was studied at paced (10 and 16 bpm) deep and shallow breathing. These measurements were repeated for each subject in three different postures (lying down, reclining and sitting). Cole parameter estimation was improved by sampling at the tidal expiration point, thus removing breathing artefacts. This leads to lower intra-subject variability between sessions and a need for fewer measurement points to accurately assess the spectra. Future work should explore algorithmic artefact compensation models using breathing, posture or other patient contextual information to improve ambulatory transthoracic impedance measurements.

  5. Theoretical estimation of the transonic aerodynamic characteristics of a supercritical-wing transport model with trailing-edge controls

    NASA Technical Reports Server (NTRS)

    Luckring, J. M.; Mann, M. J.

    1978-01-01

    A method for rapidly estimating the overall forces and moments at supercritical speeds, below drag divergence, of transport configurations with supercritical wings is presented. The method was also used for estimating the rolling moments due to the deflection of wing trailing-edge controls. This analysis was based on a vortex-lattice technique modified to approximate the effects of wing thickness and boundary-layer induced camber. Comparisons between the results of this method and experiment indicate reasonably good correlation of the lift, pitching moment, and rolling moment. The method required much less storage and run time to compute solutions over an angle-of-attack range than presently available transonic nonlinear methods require for a single angle-of-attack solution.

  6. Estimating pathway-specific contributions to biodegradation in aquifers based on dual isotope analysis: theoretical analysis and reactive transport simulations.

    PubMed

    Centler, Florian; Heße, Falk; Thullner, Martin

    2013-09-01

    At field sites with varying redox conditions, different redox-specific microbial degradation pathways contribute to total contaminant degradation. The identification of pathway-specific contributions to total contaminant removal is of high practical relevance, yet difficult to achieve with current methods. Current stable-isotope-fractionation-based techniques focus on the identification of dominant biodegradation pathways under constant environmental conditions. We present an approach based on dual stable isotope data to estimate the individual contributions of two redox-specific pathways. We apply this approach to carbon and hydrogen isotope data obtained from reactive transport simulations of an organic contaminant plume in a two-dimensional aquifer cross section to test the applicability of the method. To take aspects typically encountered at field sites into account, additional simulations addressed the effects of transverse mixing, diffusion-induced stable-isotope fractionation, heterogeneities in the flow field, and mixing in sampling wells on isotope-based estimates for aerobic and anaerobic pathway contributions to total contaminant biodegradation. Results confirm the general applicability of the presented estimation method which is most accurate along the plume core and less accurate towards the fringe where flow paths receive contaminant mass and associated isotope signatures from the core by transverse dispersion. The presented method complements the stable-isotope-fractionation-based analysis toolbox. At field sites with varying redox conditions, it provides a means to identify the relative importance of individual, redox-specific degradation pathways.
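    This is not the paper's formulation, but the core idea can be illustrated with a simple mixing model: if each redox-specific pathway has a known, element-specific enrichment factor, the observed carbon and hydrogen enrichment is a mixture of the two, and the aerobic fraction can be recovered by least squares. All enrichment factors below are hypothetical.

```python
import numpy as np

def aerobic_fraction(eps_obs, eps_aerobic, eps_anaerobic):
    """Solve eps_obs ~ f*eps_aerobic + (1 - f)*eps_anaerobic for f in [0, 1],
    jointly over the carbon and hydrogen isotope systems (1-D least squares)."""
    a = np.asarray(eps_aerobic, float) - np.asarray(eps_anaerobic, float)
    b = np.asarray(eps_obs, float) - np.asarray(eps_anaerobic, float)
    f = float(a @ b / (a @ a))
    return min(max(f, 0.0), 1.0)

# Hypothetical enrichment factors (permil), order: [carbon, hydrogen]
f_aerobic = aerobic_fraction(eps_obs=[-2.0, -30.0],
                             eps_aerobic=[-1.0, -20.0],
                             eps_anaerobic=[-4.0, -60.0])  # ~0.75
```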

  7. Cooperative dry-electrode sensors for multi-lead biopotential and bioimpedance monitoring.

    PubMed

    Rapin, M; Proença, M; Braun, F; Meier, C; Solà, J; Ferrario, D; Grossenbacher, O; Porchet, J-A; Chételat, O

    2015-04-01

    Cooperative sensors is a novel measurement architecture that allows the acquisition of biopotential signals on patients in a comfortable and easy-to-integrate manner. The novel sensors are defined as cooperative in the sense that at least two of them work in concert to measure a target physiological signal, such as a multi-lead electrocardiogram or a thoracic bioimpedance. This paper starts by analysing the state-of-the-art methods to simultaneously measure biopotential and bioimpedance signals, and justifies why currently (1) passive electrodes require the use of shielded or double-shielded cables, and (2) active electrodes require the use of multi-wired cable technologies, when aiming at high-quality physiological measurements. In order to overcome the limitations of the state of the art, a new method for biopotential and bioimpedance measurement using the cooperative sensors is then presented. The novel architecture allows the acquisition of the aforementioned biosignals without the need for shielded or multi-wire cables by splitting the electronics into separate electronic sensors, each comprising two electrodes, one for voltage measurement and one for current injection. The sensors are directly in contact with the skin and connected together by only one unshielded wire. This new configuration requires one power supply per sensor, and all sensors need to be synchronized to allow them to work in concert. After presenting the working principle of the cooperative sensor architecture, this paper reports first experimental results on the use of the technology when applied to measuring multi-lead ECG signals on patients.
    Measurements performed on a healthy patient demonstrate the feasibility of using this novel cooperative sensor architecture to measure biopotential signals, and compliance with the common-mode rejection specification of the international standard IEC 60601-2-47 has also been assessed. By reducing the need for complex wiring setups, and

  8. Effects of metoclopramide on gastric motility measured by short-term bio-impedance

    PubMed Central

    Huerta-Franco, María-Raquel; Vargas-Luna, Miguel; Capaccione, Kathleen M; Yañez-Roldán, Etna; Hernández-Ledezma, Ulises; Morales-Mata, Ismael; Córdova-Fraga, Teodoro

    2009-01-01

    AIM: To analyze the accuracy of short-term bio-impedance as a means of measuring gastric motility. METHODS: We evaluated differences in the short-term electrical bio-impedance signal from the gastric region in the following conditions: (1) fasting state, (2) after the administration of metoclopramide (a drug that induces an increase in gastric motility) and (3) after food ingestion in 23 healthy volunteers. We recorded the real component of the electrical impedance signal from the gastric region for 1000 s. We performed a Fast Fourier Transform (FFT) on these data and then compared the signal among the fasting, medicated, and postprandial conditions using the median of the area under the curve, the relative area under the curve and the main peak activity. RESULTS: The median of the area under the curve in the frequency region between 2 and 8 cycles per minute (cpm) decreased from 4.7 cpm in the fasting condition to 4.0 cpm in the medicated state (t = 3.32, P = 0.004). This concurred with the decrease in the relative area under the FFT curve in the region from 4 to 8 cpm, from 38.3% to 26.6% (t = 2.81, P = 0.012), and the increase in area in the region from 2 to 4 cpm, from 22.4% to 27.7% (t = -2.5, P = 0.022). Finally, the main peak position also decreased in the region from 2 to 8 cpm. Main peak activity in the fasting state was 4.72 cpm and declined to 3.45 cpm in the medicated state (t = 2.47, P = 0.025). There was a decrease from the fasting state to the postprandial state, to 3.02 cpm (t = 4.0, P = 0.0013). CONCLUSION: Short-term electrical bio-impedance can assess gastric motility changes in individuals experiencing gastric stress by analyzing the area medians and relative areas under the FFT curve. PMID:19824108
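The band-area comparison used in this study can be sketched as follows: take the FFT of the impedance record and integrate the magnitude spectrum over the 2-4 and 4-8 cpm bands. The synthetic signal below (a dominant 3 cpm component plus noise) is an illustration only, not the study's data.

```python
import numpy as np

np.random.seed(0)
fs = 1.0                                  # one sample per second
t = np.arange(0.0, 1000.0, 1.0 / fs)      # 1000 s record, as in the study

# Synthetic "gastric" signal: a dominant 3 cpm component plus noise
# (illustration only; the study used the real part of the impedance signal).
f_gastric = 3.0 / 60.0                    # 3 cycles per minute, in Hz
sig = np.sin(2 * np.pi * f_gastric * t) + 0.1 * np.random.randn(t.size)

spec = np.abs(np.fft.rfft(sig))
freqs_cpm = np.fft.rfftfreq(t.size, d=1.0 / fs) * 60.0   # frequency axis in cpm

def band_area(lo, hi):
    """Area under the FFT magnitude curve between lo and hi cpm."""
    m = (freqs_cpm >= lo) & (freqs_cpm < hi)
    return np.trapz(spec[m], freqs_cpm[m])

rel_2_4 = band_area(2, 4) / band_area(2, 8)   # relative area, as in the study
band = (freqs_cpm >= 2) & (freqs_cpm <= 8)
peak_cpm = freqs_cpm[band][np.argmax(spec[band])]
print(rel_2_4, peak_cpm)
```

With a 1000 s record the frequency resolution is 0.06 cpm, comfortably resolving shifts of the main peak within the 2-8 cpm gastric band.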

  9. Theoretical geodesy

    NASA Astrophysics Data System (ADS)

    Borkowski, Andrzej; Kosek, Wiesław

    2015-12-01

    The paper presents a summary of research activities concerning theoretical geodesy performed in Poland in the period 2011-2014. It contains the results of research on new methods of parameter estimation, a study of the robustness properties of M-estimation, control network and deformation analysis, and geodetic time series analysis. The main achievements in geodetic parameter estimation involve a new model of M-estimation with probabilistic models of geodetic observations, a new Shift-Msplit estimation, which allows the estimation of a vector of parameter differences, and the Shift-Msplit(+), a generalisation of Shift-Msplit estimation for the case in which the design matrix A of the functional model does not have full column rank. New algorithms for coordinate conversion between Cartesian and geodetic coordinates, on both the rotational and the triaxial ellipsoid, can be mentioned as highlights of the research of the last four years. The new parameter estimation models developed have been adopted and successfully applied to control network and deformation analysis. New algorithms based on the wavelet, Fourier and Hilbert transforms were applied to find time-frequency characteristics of geodetic and geophysical time series, as well as time-frequency relations between them. Statistical properties of these time series are also presented using different statistical tests, as well as the 2nd, 3rd and 4th moments about the mean. New forecasting methods are presented which enable prediction of the considered time series in different frequency bands.

  10. Automatic control of a drop-foot stimulator based on angle measurement using bioimpedance.

    PubMed

    Nahrstaedt, Holger; Schauer, Thomas; Shalaby, Raafat; Hesse, Stefan; Raisch, Jörg

    2008-08-01

    The topic of this contribution is iterative learning control of a drop-foot stimulator in which a predefined angle profile during the swing phase is realized. Ineffective dorsiflexion is compensated by feedback-controlled stimulation of the muscle tibialis anterior. The ankle joint measurement is based on changes in the bioimpedance (BI) caused by leg movements. A customized four-channel BI measurement system was developed. The suggested control approach and the new measurement method for the joint angle were successfully tested in preliminary experiments with a neurologically intact subject. Reference angle measurements were taken with a marker-based optical system. An almost linear relation between joint angle and BI was found for the angle range applicable during gait. The desired angle trajectory was closely tracked by the iterative learning controller after three gait cycles. The final root mean square tracking error was below 5 degrees.
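The iterative learning control scheme described above can be illustrated with a minimal sketch: the stimulation input for the next gait cycle is the previous input plus a learning gain times the (time-shifted) angle tracking error from the previous cycle. The first-order "ankle" model, the gain, and the reference profile below are hypothetical illustration values, not the study's parameters.

```python
import numpy as np

N = 100                                        # samples per swing phase
ref = 10.0 * np.sin(np.pi * np.arange(N) / N)  # desired ankle angle profile (deg)

def plant(u):
    """Hypothetical first-order stimulation-to-angle response (not the study's model)."""
    y = np.zeros_like(u)
    for k in range(1, u.size):
        y[k] = 0.2 * y[k - 1] + 0.5 * u[k - 1]
    return y

u = np.zeros(N)
gamma = 0.5                                    # learning gain
for cycle in range(10):                        # one update per gait cycle
    e = ref - plant(u)
    u[:-1] += gamma * e[1:]                    # P-type ILC with one-sample shift

rmse = np.sqrt(np.mean((ref - plant(u)) ** 2))
print(rmse)
```

The one-sample shift in the update accounts for the plant's input delay; with these assumed values the tracking error contracts monotonically from cycle to cycle.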

  11. Design and characterization of a multi-frequency bioimpedance measurement prototype

    NASA Astrophysics Data System (ADS)

    Mattia Neto, O. E.; Porto, R. W.; Aya, J. C. C.

    2012-12-01

    A multi-frequency bioimpedance measurement prototype is proposed, validated and characterized. It consists of an Improved Howland Current Source controlled by voltage, a load-voltage sensing scheme based on a discrete three-opamp instrumentation amplifier, in-phase and quadrature demodulation using analog multipliers, and digitization and processing of the signals using a digital benchtop multimeter. The electrical characterization of the measurement channel was done for resistive loads only, on four different circuits. Measurements were made at 10 frequencies, from 100 kHz to 1 MHz, with 10 load resistances, from 100 Ω to 1 kΩ, to obtain linearity, absolute error and frequency response. The best performance among the four circuits was a maximum absolute error of 5.55% and a load-current variation of -1.93% in the worst-case scenario.

  12. Operational research in primary health care planning: a theoretical model for estimating the coverage achieved by different distributions of staff and facilities

    PubMed Central

    Kemball-Cook, D.; Vaughan, J. P.

    1983-01-01

    This report outlines a basic operational research model for estimating the coverage achieved by different distributions of primary health care staff and facilities, using antenatal home visiting as an illustrative example. Coverage is estimated in terms of the average number of patient contacts achieved per annum. The model takes into account such features as the number of facilities and health workers per 10 000 population, the radius of the health facility area, the overall population density in the region, the number of working days in the year, and the health worker's travelling time and work rate. A theoretical planning situation is also presented, showing the application of the model in defining various possible strategies, using certain planning norms for new levels of staff and facilities. This theoretical model is presented as an example of the use of operational research in primary health care, but it needs to be tested and validated in known situations before its usefulness can be assessed. Some indications are given of the ways in which the model could be adapted and improved for application to a real planning situation.
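A coverage model of this kind can be sketched numerically. The report's actual equations are not given in the abstract, so the relationships below (visits per day limited by service time plus round-trip travel time within the facility radius) are assumed for illustration only, as are all parameter values.

```python
def contacts_per_capita(workers_per_10k, working_days, day_hours,
                        service_time_h, radius_km, speed_kmh):
    """Illustrative coverage estimate (assumed relationships, not the paper's
    actual equations): average home-visit contacts per person per year."""
    # Mean distance from the facility to a home distributed uniformly
    # over a disc of the given radius is (2/3) * radius.
    mean_dist_km = 2.0 / 3.0 * radius_km
    travel_h = 2.0 * mean_dist_km / speed_kmh          # round trip per visit
    visits_per_worker_day = day_hours / (service_time_h + travel_h)
    visits_per_year = workers_per_10k * working_days * visits_per_worker_day
    return visits_per_year / 10_000.0

c = contacts_per_capita(workers_per_10k=2, working_days=250, day_hours=6,
                        service_time_h=0.5, radius_km=3.0, speed_kmh=5.0)
print(round(c, 2))   # average antenatal contacts per person per year
```

Even in this toy form, the model reproduces the trade-off the report explores: enlarging the facility radius raises travel time per visit and lowers the achievable contact rate.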

  13. Photophysical properties of thiadiazoles derivative: Estimation of ground and excited state dipole moments by theoretical and experimental approach

    NASA Astrophysics Data System (ADS)

    Muddapur, G. V.; Koppal, V. V.; Patil, N. R.; Melavanki, R. M.

    2016-05-01

    The absorption and fluorescence spectra of a newly synthesized thiadiazole derivative, namely 6-(4-chlorophenyl)-2-(naphthalene-1-ylmethyl) imidazo [2,1-b][1,3,4] thiadiazole [6CNMT], have been recorded in various solvents of different polarities. The ground state dipole moment of 6CNMT was obtained from quantum chemical calculations. Solvatochromic correlations were used to estimate the ground state (μg) and excited state (μe) dipole moments. The excited state dipole moments are observed to be greater than the ground state dipole moment. Further, the changes in dipole moment (Δμ) were calculated both from the solvatochromic shift method and from the microscopic solvent polarity parameter (ETN), and the values are compared. The spectral variations were also analyzed by Kamlet-Taft parameters.
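The solvatochromic shift estimation of Δμ referenced above is commonly based on the Lippert-Mataga relation; a standard form is shown here for orientation, not necessarily the exact correlation the authors used:

```latex
\bar{\nu}_a - \bar{\nu}_f
  = \frac{2\,(\mu_e - \mu_g)^2}{h c a^3}\,\Delta f + \mathrm{const},
\qquad
\Delta f = \frac{\varepsilon - 1}{2\varepsilon + 1} - \frac{n^2 - 1}{2n^2 + 1},
```

where the Stokes shift (absorption minus fluorescence wavenumber) is plotted against the solvent polarity function Δf built from the dielectric constant ε and refractive index n; with an estimate of the Onsager cavity radius a, the slope yields (μe - μg)² and hence Δμ.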

  14. Consistency of aortic distensibility and pulse wave velocity estimates with respect to the Bramwell-Hill theoretical model: a cardiovascular magnetic resonance study

    PubMed Central

    2011-01-01

    Background Arterial stiffness is considered as an independent predictor of cardiovascular mortality, and is increasingly used in clinical practice. This study aimed at evaluating the consistency of the automated estimation of regional and local aortic stiffness indices from cardiovascular magnetic resonance (CMR) data. Results Forty-six healthy subjects underwent carotid-femoral pulse wave velocity measurements (CF_PWV) by applanation tonometry and CMR with steady-state free-precession and phase contrast acquisitions at the level of the aortic arch. These data were used for the automated evaluation of the aortic arch pulse wave velocity (Arch_PWV), and the ascending aorta distensibility (AA_Distc, AA_Distb), which were estimated from ascending aorta strain (AA_Strain) combined with either carotid or brachial pulse pressure. The local ascending aorta pulse wave velocity AA_PWVc and AA_PWVb were estimated respectively from these carotid and brachial derived distensibility indices according to the Bramwell-Hill theoretical model, and were compared with the Arch_PWV. In addition, a reproducibility analysis of AA_PWV measurement and its comparison with the standard CF_PWV was performed. Characterization according to the Bramwell-Hill equation resulted in good correlations between Arch_PWV and both local distensibility indices AA_Distc (r = 0.71, p < 0.001) and AA_Distb (r = 0.60, p < 0.001); and between Arch_PWV and both theoretical local indices AA_PWVc (r = 0.78, p < 0.001) and AA_PWVb (r = 0.78, p < 0.001). Furthermore, the Arch_PWV was well related to CF_PWV (r = 0.69, p < 0.001) and its estimation was highly reproducible (inter-operator variability: 7.1%). Conclusions The present work confirmed the consistency and robustness of the regional index Arch_PWV and the local indices AA_Distc and AA_Distb according to the theoretical model, as well as to the well established measurement of CF_PWV, demonstrating the relevance of the regional and local CMR indices. PMID
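The Bramwell-Hill relation used above to link local distensibility and pulse wave velocity can be sketched numerically. The blood density, strain, and pulse pressure below are typical assumed values, not the study's data.

```python
import math

RHO = 1059.0   # blood density in kg/m^3 (typical literature value)

def distensibility(strain, pulse_pressure_mmhg):
    """Local distensibility D = strain / pulse pressure (1/Pa),
    mirroring AA_Dist = AA_Strain / PP in the study."""
    return strain / (pulse_pressure_mmhg * 133.322)   # mmHg -> Pa

def bramwell_hill_pwv(d_per_pa):
    """Bramwell-Hill model: PWV = 1 / sqrt(rho * D), in m/s."""
    return 1.0 / math.sqrt(RHO * d_per_pa)

# Assumed example values (not the study's data):
# 10% ascending-aorta cross-sectional strain, 40 mmHg pulse pressure.
D = distensibility(0.10, 40.0)
pwv = bramwell_hill_pwv(D)
print(round(pwv, 2))   # a physiologically plausible aortic PWV in m/s
```

This is the same theoretical conversion the study applies to derive AA_PWVc and AA_PWVb from carotid- and brachial-derived distensibility before comparing them with Arch_PWV.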

  15. Empirical estimates and theoretical predictions of the shorting factor for the THEMIS double-probe electric field instrument

    NASA Astrophysics Data System (ADS)

    Califf, S.; Cully, C. M.

    2016-07-01

    Double-probe electric field measurements on board spacecraft present significant technical challenges, especially in the inner magnetosphere where the ambient plasma characteristics can vary dramatically and alter the behavior of the instrument. We explore the shorting factor for the Time History of Events and Macroscale Interactions during Substorms electric field instrument, which is a scale factor error on the measured electric field due to coupling between the sensing spheres and the long wire booms, using both an empirical technique and through simulations with varying levels of fidelity. The empirical data and simulations both show that there is effectively no shorting when the spacecraft is immersed in high-density plasma deep within the plasmasphere and that shorting becomes more prominent as plasma density decreases and the Debye length increases outside the plasmasphere. However, there is a significant discrepancy between the data and theory for the shorting factor in low-density plasmas: the empirical estimate indicates ~0.7 shorting for long Debye lengths, but the simulations predict a shorting factor of ~0.94. This paper systematically steps through the empirical and modeling methods leading to the disagreement with the intention of motivating further study on the topic.

  16. A comparative study of nano-scale coatings on gold electrodes for bioimpedance studies of breast cancer cells.

    PubMed

    Srinivasaraghavan, Vaishnavi; Strobl, Jeannine; Wang, Dong; Heflin, James R; Agah, Masoud

    2014-10-01

    The relative sensitivity of standard gold microelectrodes for electric cell-substrate impedance sensing was compared with that of gold microelectrodes coated with gold nanoparticles, carbon nanotubes, or electroplated gold to introduce nano-scale roughness on the surface of the electrodes. For biological solutions, the electroplated gold coated electrodes had significantly higher sensitivity to changes in conductivity than electrodes with other coatings. In contrast, the carbon nanotube coated electrodes displayed the highest sensitivity to MDA-MB-231 metastatic breast cancer cells. There was also a significant shift in the peak frequency of the cancer cell bioimpedance signal based on the type of electrode coating. The results indicate that nano-scale coatings which introduce varying degrees of surface roughness can be used to modulate the frequency dependent sensitivity of the electrodes and optimize electrode sensitivity for different bioimpedance sensing applications.

  17. Theoretical estimation for equilibrium Mo isotope fractionations between dissolved Mo species and the adsorbed complexes on (Fe,Mn)-oxyhydroxides

    NASA Astrophysics Data System (ADS)

    Tang, M.; Liu, Y.

    2009-12-01

    Although Mo isotopes have been increasingly used as a paleoredox proxy in the study of paleo-oceanographic condition changes (Barling et al., 2001; Siebert et al., 2003, 2005, 2006; Arnold et al., 2004; Poulson et al., 2006), some very basic aspects of Mo isotope geochemistry have not yet been established. First, although there are several previous studies on equilibrium Mo isotope fractionation factors (Tossell, 2005; Weeks et al., 2007; Wasylenki et al., 2008), these studies dealt with species in vacuum, and we find that solvation effects for Mo species in solution cannot be ignored; accurate Mo fractionation factors have therefore not yet been determined. Second, apart from the dominant dissolved Mo species in seawater, the molybdate ion (MoO4(2-)), the forms of possible minor species remain elusive. Third, the Mo removal mechanisms from seawater are only known for anoxic and euxinic conditions (e.g. Helz et al., 1996; Zheng et al., 2000); the Mo removal mechanism under oxic conditions is still debated. Fourth, the effects of adsorption on Mo isotope fractionation are almost completely unknown. Without this knowledge of adsorption-driven fractionation, it is difficult to understand many distinct fractionations found in a number of geologic systems and to explain the exceptionally long residence time of Mo in seawater. A theoretical method based on the Urey model (Bigeleisen-Mayer equation) and super-molecule clusters is used to evaluate the fractionation factors precisely. The B3LYP/(6-311+G(2df,p), LANL2DZ) level of theory is used for the frequency calculations. 24 water molecules are used to form the supermolecules surrounding the Mo species. At least 4 different conformers of each supermolecule are used to prevent errors arising from the diversity of configurations in solution. 
This study provides accurate equilibrium Mo isotope fractionation factors between possible dissolved Mo species and the adsorbed Mo species on the

  18. Bioimpedance analysis and plasma B-type natriuretic peptide assay may cooperate in diagnosing and managing heart failure.

    PubMed

    Tamagno, Gianluca; Guzzon, Samuele

    2006-06-01

    We describe the case of an obese patient presenting with leg oedema, progressive oliguria, orthopnoea and mildly increased B-type natriuretic peptide (BNP) levels. Bioimpedance analysis (BIA) provided additional data for the interpretation of the plasma BNP values, contributing to the diagnosis of heart failure and the appropriate management of the patient. In our opinion, BIA could represent a useful tool to complement the plasma BNP assay in both the diagnosis and the management of heart failure.

  19. Systematic estimation of theoretical uncertainties in the calculation of the pion-photon transition form factor using light-cone sum rules

    NASA Astrophysics Data System (ADS)

    Mikhailov, S. V.; Pimikov, A. V.; Stefanis, N. G.

    2016-06-01

    We consider the calculation of the pion-photon transition form factor Fγ*γπ0(Q2) within light-cone sum rules, focusing attention on the low-mid region of momenta. The central aim is to estimate the theoretical uncertainties which originate from a wide variety of sources related to (i) the relevance of next-to-next-to-leading order radiative corrections, (ii) the influence of the twist-four and twist-six terms, (iii) the sensitivity of the results to auxiliary parameters, like the Borel scale M2, (iv) the role of the phenomenological description of resonances, and (v) the significance of a small but finite virtuality of the quasireal photon. Predictions for Fγ*γπ0(Q2) are presented which include all these uncertainties and are found to comply, within the margin of experimental error, with the existing data in the Q2 range between 1 and 5 GeV2, thus justifying the reliability of the applied calculational scheme. This provides a solid basis for confronting theoretical predictions with forthcoming data bearing small statistical errors.

  20. Theoretical study on the amplitude ratio of the seismoelectric field to the Stoneley wave and the formation tortuosity estimation from seismoelectric logs

    NASA Astrophysics Data System (ADS)

    Guan, Wei; Yin, Chenggang; Wang, Jun; Cui, Naigang; Hu, Hengshan; Wang, Zhi

    2015-12-01

    Seismoelectric logging has potential applications in exploring the characteristics of porous formations. In this study, we theoretically address how the amplitudes of the Stoneley wave and its accompanying borehole seismoelectric field depend on the porous formation parameters such as porosity, permeability and tortuosity. We calculate the ratio of the electric field amplitude to the pressure amplitude (REP amplitude) for the low-frequency Stoneley wave of different formations and find that the REP amplitude is sensitive to porosity and tortuosity but insensitive to permeability. To confirm this conclusion, we derive the approximate expression of the REP amplitude in the wavenumber-frequency domain, which shows that the REP amplitude is dependent on tortuosity but independent of permeability. This contradicts the conclusion that previous researchers drew from experiments, namely that the REP amplitude is directly proportional to the permeability. This is probably because rock samples with different permeabilities typically have different tortuosities. Since the REP amplitude is sensitive to tortuosity, we propose a method of estimating formation tortuosity from seismoelectric logs. It is implemented by using the REP amplitude and the wavenumber of the Stoneley wave at a given frequency when the porosity and the pore fluid viscosity and salinity are known. We test the method by estimating the tortuosities from synthetic seismoelectric waveforms in different formations. The result shows that the errors relative to the input tortuosities are lower than 5.0 per cent without considering the uncertainties of other input parameters.

  1. A theoretical discussion of the use of the Lineweaver-Burk plot to estimate kinetic parameters of intestinal transport in the presence of unstirred water layers.

    PubMed

    Thomson, A B

    1981-09-01

    Transport of a solute molecule from the bulk phase in the intestinal lumen into the mucosal cells is determined by the rate of movement of the solute molecule across two barriers, the unstirred water layers (UWL) and the microvillus membrane. Failure to account for the effect of the resistance offered by the UWL introduces significant errors into the estimate of kinetic constants of carrier-mediated transport, and these errors may be further magnified by the use of the Lineweaver-Burk plot. This study was undertaken to examine the use of this plot under conditions that depict the effect of varying the effective resistance of the UWL, the distribution of transport sites along the villus (fn), the passive permeability coefficient (P), the maximal transport rate (Jdm), and the Michaelis constant (Km). Theoretical curves derived from a new equation demonstrate that (1) the Lineweaver-Burk plot is linear under only a limited number of conditions, and even then may lead to serious over- or under-estimation of Jdm and Km; (2) failure to correct for passive permeation may give rise to additional quantitative discrepancies between the true and apparent values of Jdm and Km; and (3) the qualitative characteristics of a carrier-mediated intestinal transport system may be ascertained only after correction for the contribution of passive permeation, and after correction for the effective resistance of the UWL.
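The double-reciprocal fit at issue can be sketched in the ideal case (no UWL, no passive permeation), where the Lineweaver-Burk plot is exactly linear and Jdm and Km are recovered perfectly. The kinetic values are assumed illustration numbers; with a UWL, the substrate concentration at the membrane is depleted below the bulk value, the plot bends, and the fitted constants are biased, which is the failure mode the study analyzes.

```python
import numpy as np

# Assumed "true" Michaelis-Menten kinetics for illustration.
Jdm_true, Km_true = 100.0, 2.0

S = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # substrate concentrations
v = Jdm_true * S / (Km_true + S)              # ideal rates: no UWL, no passive leak

# Lineweaver-Burk: 1/v = (Km/Jdm) * (1/S) + 1/Jdm, a straight line in 1/S.
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Jdm_est = 1.0 / intercept
Km_est = slope * Jdm_est
print(Jdm_est, Km_est)
```

Because the reciprocal transform weights low-concentration (low-rate) points most heavily, and it is exactly those points that a UWL distorts most, errors in this fit can be greatly magnified, as the abstract argues.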

  2. Bioimpedance spectroscopy can precisely discriminate human breast carcinoma from benign tumors

    PubMed Central

    Du, Zhenggui; Wan, Hangyu; Chen, Yu; Pu, Yang; Wang, Xiaodong

    2017-01-01

    Abstract Intraoperative frozen pathology is critical when a breast tumor is not diagnosed before surgery. However, frozen tumor tissues always present various microscopic morphologies, leading to a high misdiagnosis rate from frozen section examination. Thus, we aimed to identify breast tumors using bioimpedance spectroscopy (BIS), a technology that measures the tissues’ impedance. We collected and measured 976 specimens from breast patients during surgery, including 581 breast cancers, 190 benign tumors, and 205 normal mammary gland tissues. After measurement, Cole-Cole curves were generated by a bioimpedance analyzer and the parameters R0/R∞, fc, and α were calculated from the curve. The Cole-Cole curves showed a trend to differentiate mammary gland, benign tumors, and cancer. However, some curves overlapped with those of other groups, showing that this is not an ideal model. Subsequent univariate analysis of R0/R∞, fc, and α showed significant differences between benign tumor and cancer. However, receiver operating characteristic (ROC) analysis indicated that the diagnostic value of fc and R0/R∞ was not superior to frozen sections (area under curve [AUC] = 0.836 and 0.849, respectively), and α was useless in diagnosis (AUC = 0.596). After further research, we found that a scatter diagram showed a synergistic effect of R0/R∞ and fc in discriminating cancer from benign tumors. Thus, we used multivariate analysis, which revealed that these two parameters were independent predictors, to combine them. A simplified equation, RF′ = 0.2fc + 3.6R0/R∞, based on the multivariate analysis was developed. The ROC curve for RF′ showed an AUC = 0.939, and the sensitivity and specificity were 82.62% and 95.79%, respectively. To match a clinical setting, the diagnostic criteria were set at 6.91 and 12.9 for negative and positive diagnosis, respectively. In conclusion, RF′ derived from BIS can discriminate benign tumors and cancers, and integrated criteria
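The reported index and its two cut-offs lend themselves to a small decision sketch. The equation RF′ = 0.2fc + 3.6R0/R∞ and the 6.91/12.9 criteria are from the abstract; the direction (higher RF′ indicating malignancy), the units of fc, and the example measurements are assumptions for illustration.

```python
def rf_prime(fc, r_ratio):
    """Combined index from the abstract: RF' = 0.2*fc + 3.6*R0/Rinf.
    Units of fc are an assumption here (not stated in the abstract)."""
    return 0.2 * fc + 3.6 * r_ratio

def classify(rf):
    """Apply the reported cut-offs, assuming higher RF' means malignancy:
    <= 6.91 negative, >= 12.9 positive, otherwise indeterminate."""
    if rf <= 6.91:
        return "benign"
    if rf >= 12.9:
        return "cancer"
    return "indeterminate"   # between the cut-offs: defer to frozen section

# Hypothetical measurements (illustration values only):
print(classify(rf_prime(10.0, 1.2)))   # RF' = 6.32
print(classify(rf_prime(40.0, 2.5)))   # RF' = 17.0
```

Using two thresholds rather than one trades a band of indeterminate results for higher confidence in the calls that are made, matching the clinical-setting intent described above.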

  3. The differential Howland current source with high signal to noise ratio for bioimpedance measurement system

    SciTech Connect

    Liu, Jinzhen; Li, Gang; Lin, Ling; Qiao, Xiaoyan; Wang, Mengjun; Zhang, Weibo

    2014-05-15

    The stability and signal-to-noise ratio (SNR) of the current source circuit are important factors in enhancing the accuracy and sensitivity of a bioimpedance measurement system. In this paper we propose a new differential Howland topology current source and evaluate its output characteristics by simulation and actual measurement. The results include: (1) The output current and impedance at high frequencies are stabilized after compensation, and the stability of the output current of the differential current source circuit (DCSC) is 0.2%. (2) The output impedance of both current circuits is above 1 MΩ below 200 kHz, and remains above 200 kΩ below 1 MHz; overall, the output impedance of the DCSC is higher than that of the Howland current source circuit (HCSC). (3) The SNR of the DCSC is 85.64 dB in simulation and 65 dB in actual measurement at 10 kHz, which illustrates that the DCSC effectively eliminates common-mode interference. (4) The maximum load of the DCSC is twice that of the HCSC. Lastly, a two-dimensional phantom is well reconstructed by electrical impedance tomography with the proposed DCSC. Therefore, the measured performance shows that the DCSC can significantly improve the output impedance, stability, maximum load, and SNR of the measurement system.

  4. Pulse wave detection method based on the bio-impedance of the wrist

    NASA Astrophysics Data System (ADS)

    He, Jianman; Wang, Mengjun; Li, Xiaoxia; Li, Gang; Lin, Ling

    2016-05-01

    Real-time monitoring of the pulse rate can help evaluate heart health, and bio-impedance measurement has potential for wearable health monitoring systems. In this paper, an effective method, combining a self-balancing bridge, flexible electrodes, and a high-speed digital lock-in algorithm (DLIA) with over-sampling, was designed to detect the impedance pulse wave at the wrist. By applying the self-balancing bridge, the baseline impedance can be compensated as much as possible, so that the low-amplitude impedance variation related to the heart pulse can be obtained more easily. The flexible conductive rubber electrodes used in our experiment are skin-friendly. Besides, the over-sampling method and the high-speed DLIA are used to enhance the effective resolution of the data sampled by the analog-to-digital converter. With the high-speed data processing and simple circuit above, the proposed method has potential for wrist-band wearable systems, as it can meet the requirements of small size and low power consumption.
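The digital lock-in step can be sketched as follows: multiply the sampled bridge signal by in-phase and quadrature references at the excitation frequency and low-pass (here, average) the products to recover the small impedance-related amplitude and phase. The sample rate, excitation frequency, and signal values are assumptions for illustration, not the paper's design values.

```python
import numpy as np

fs = 100_000.0                       # sample rate in Hz (assumed)
f0 = 10_000.0                        # excitation frequency in Hz (assumed)
t = np.arange(0.0, 0.01, 1.0 / fs)   # 10 ms of data, an integer number of periods

# Simulated wrist signal: small impedance-related carrier plus noise.
amp, phase = 2e-3, 0.3
rng = np.random.default_rng(1)
x = amp * np.cos(2 * np.pi * f0 * t + phase) + 1e-3 * rng.standard_normal(t.size)

# Digital lock-in: multiply by quadrature references and average (low-pass).
i = 2.0 * np.mean(x * np.cos(2 * np.pi * f0 * t))
q = 2.0 * np.mean(x * -np.sin(2 * np.pi * f0 * t))
amp_est, phase_est = np.hypot(i, q), np.arctan2(q, i)
print(amp_est, phase_est)
```

Averaging over many excitation periods rejects noise away from f0, which is what lets a lock-in pull a millivolt-scale pulse component out of a much larger baseline.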

  5. Development of a Stair-Step Multifrequency Synchronized Excitation Signal for Fast Bioimpedance Spectroscopy

    PubMed Central

    Bian, He; Du, Fangling; Sun, Qiang

    2014-01-01

    A wideband excitation signal with a finite number of prominent harmonic components is desirable for fast bioimpedance spectroscopy (BIS) measurements. This work introduces a simple method to synthesize and realize a type of periodical stair-step multifrequency synchronized (MFS) signal. Fourier series analysis shows that the p-order MFS signal f(p, t) has a constant 81.06% of its energy distributed equally over its p·2^n-th primary harmonics. The synthesis principle is described first, and then two examples, the 4-order and 5-order MFS signals f(4, t) and f(5, t), are synthesized. The method to implement the MFS waveform on a field-programmable gate array (FPGA) with a digital-to-analog converter (DAC) is also presented. Both the number and the frequencies of the expected primary harmonics can be adjusted as needed. An impedance measurement experiment on an RC three-element equivalent model was performed, and the results show acceptable precision, which validates the feasibility of MFS excitation. PMID:24701563

  6. Ventilation and Heart Rate Monitoring in Drivers using a Contactless Electrical Bioimpedance System

    NASA Astrophysics Data System (ADS)

    Macías, R.; García, M. A.; Ramos, J.; Bragós, R.; Fernández, M.

    2013-04-01

    Road safety is one of the highest priorities in the automotive industry. This safety is often jeopardized by driving in impaired states, e.g. under drowsiness, drugs and/or alcohol. Several systems for monitoring the behaviour of subjects while driving are therefore being researched. In this paper, a device based on a contactless electrical bioimpedance system is presented. Using the four-wire technique, this system is capable of obtaining the heart rate and the ventilation of the driver through multiple textile electrodes. These textile electrodes are placed on the car seat and the steering wheel. Moreover, several measurements performed in a controlled environment, i.e. a test room free of artifacts due to car vibrations or the road state, are also reported. In these measurements, the system response can be observed as a function of several parameters, such as the placement of the electrodes or the number of clothing layers worn by the driver.

  7. Towards intraoperative surgical margin assessment and visualization using bioimpedance properties of the tissue

    NASA Astrophysics Data System (ADS)

    Khan, Shadab; Mahara, Aditya; Hyams, Elias S.; Schned, Alan; Halter, Ryan

    2015-03-01

    Prostate cancer (PCa) has a high 10-year recurrence rate, making PCa the second leading cause of cancer-specific mortality among men in the USA. PCa recurrences are often predicted by assessing the status of surgical margins (SM), with positive surgical margins (PSM) increasing the chances of biochemical recurrence by 2-4 times. To this end, an SM assessment system using Electrical Impedance Spectroscopy (EIS) was developed with a microendoscopic probe. This system measures the tissue bioimpedance over a range of frequencies (1 kHz to 1 MHz) and computes a Composite Impedance Metric (CIM), which can be used to classify tissue as benign or cancerous. The system was used to collect impedance spectra from excised prostates obtained from men undergoing radical prostatectomy. The data revealed statistically significant (p < 0.05) differences in the impedance properties of benign and tumorous tissues, and between different tissue morphologies. To visualize the results of SM assessment, a visualization tool using the da Vinci stereo laparoscope is being developed. Together with the visualization tool, the EIS-based SM assessment system can potentially be used to classify tissues intraoperatively and display the results on the surgical console with a video feed of the surgical site, thereby augmenting a surgeon's view of the site and providing a potential solution to intraoperative SM assessment needs.

  8. Process techniques for human thoracic electrical bio-impedance signal in remote healthcare systems.

    PubMed

    Rahman, Muhammad Zia Ur; Mirza, Shafi Shahsavar

    2016-06-01

    Analysis of thoracic electrical bio-impedance (TEB) facilitates estimation of heart stroke volume in sudden cardiac arrest. This Letter proposes several efficient and computationally simplified adaptive algorithms to display a high-resolution TEB component. In a clinical environment, the TEB signal encounters various physiological and non-physiological phenomena, which mask the tiny features that are important in identifying the intensity of the stroke. Moreover, computational complexity is an important parameter in a modern wearable healthcare monitoring tool. Hence, in this Letter, the authors propose a new signal conditioning technique for TEB enhancement in remote healthcare systems. For this, the authors choose a higher-order adaptive filter as the basic element in the processing of TEB. To improve filtering capability and convergence speed, and to reduce the computational complexity of the signal conditioning technique, the authors apply data normalisation and clip the data regressor. The proposed implementations are tested on real TEB signals. Finally, simulation results confirm that the proposed regressor-clipped normalised higher-order filter is suitable for a practical healthcare system.
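The "normalisation plus regressor clipping" idea can be sketched as a sign-regressor normalised adaptive noise canceller: clipping the regressor to its sign replaces multiplications in the weight update, and normalising the step stabilises convergence. The filter length, step size, coupling channel, and synthetic signals below are assumptions for illustration, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
clean = np.sin(2 * np.pi * np.arange(n) / 500.0)   # synthetic TEB-like waveform
noise_src = rng.standard_normal(n)                  # reference noise channel
h = np.array([0.5, -0.3, 0.2])                      # assumed noise-coupling channel
d = clean + np.convolve(noise_src, h, mode="full")[:n]   # corrupted recording

L, mu, eps = 8, 0.05, 1e-6
w = np.zeros(L)
out = np.zeros(n)
for k in range(L - 1, n):
    u = noise_src[k - L + 1:k + 1][::-1]   # regressor, most recent sample first
    e = d[k] - w @ u                       # error = enhanced TEB estimate
    # Sign (clipped) regressor with step normalisation: the update needs no
    # multiplication by u itself, cutting computational cost per sample.
    w += mu * e * np.sign(u) / (eps + np.abs(u).sum())
    out[k] = e

mse_before = np.mean((d - clean) ** 2)
mse_after = np.mean((out[2000:] - clean[2000:]) ** 2)
print(mse_before, mse_after)
```

After the weights converge, the canceller output tracks the clean waveform while the reference-correlated interference is largely removed, at a per-sample cost suitable for wearable hardware.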

  9. Respiration monitoring by Electrical Bioimpedance (EBI) Technique in a group of healthy males. Calibration equations.

    NASA Astrophysics Data System (ADS)

    Balleza, M.; Vargas, M.; Kashina, S.; Huerta, M. R.; Delgadillo, I.; Moreno, G.

    2017-01-01

    Several research groups have proposed electrical impedance tomography (EIT) for analysing lung ventilation. With the use of 16 electrodes, EIT can obtain a set of cross-sectional images of the thorax. In previous works, we obtained an alternating signal in terms of impedance, corresponding to respiration, from EIT images. Then, in order to transform those impedance changes into a measurable volume signal, a set of calibration equations was obtained. However, the EIT technique is still too expensive for attending outpatients in basic hospitals. For that reason, we propose the use of the electrical bioimpedance (EBI) technique to monitor respiration behaviour. The aim of this study was to obtain a set of calibration equations to transform EBI impedance changes, determined at 4 different frequencies, into a measurable volume signal. A group of 8 healthy males was assessed. The results evidenced a high mathematical adjustment in the group calibration equations. The volume determinations obtained by EBI were then compared with those obtained by our gold standard. Therefore, although EBI does not provide the complete information about lung impedance vectors that EIT does, it is possible to monitor respiration with it.
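    A calibration equation of the kind described can be sketched as an ordinary least-squares fit from impedance change to volume. The paired data below are invented for illustration; the study derives one such equation per measurement frequency.

```python
import numpy as np

# Hypothetical paired observations: EBI impedance change (ohm) at one
# frequency vs. the gold-standard volume signal (litres).
delta_z = np.array([0.8, 1.1, 1.5, 1.9, 2.4, 2.8])
volume = np.array([0.42, 0.55, 0.74, 0.95, 1.18, 1.39])

# Least-squares calibration: volume = slope * delta_z + intercept.
slope, intercept = np.polyfit(delta_z, volume, 1)
predicted = slope * delta_z + intercept
r = np.corrcoef(volume, predicted)[0, 1]  # goodness of fit
```

    The correlation coefficient `r` is what the abstract refers to as the "mathematical adjustment" of the group calibration equation.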

  10. A wireless multi-channel bioimpedance measurement system for personalized healthcare and lifestyle.

    PubMed

    Ramos, Javier; Ausín, José Luis; Lorido, Antonio Manuel; Redondo, Francisco; Duque-Carrillo, Juan Francisco

    2013-01-01

    Miniaturized, noninvasive, wearable sensors constitute a fundamental prerequisite for pervasive, predictive, and preventive healthcare systems. In this sense, this paper presents the design, realization, and evaluation of a wireless multi-channel measurement system based on a cost-effective, high-performance integrated circuit for electrical bioimpedance (EBI) measurements in the frequency range from 1 kHz to 1 MHz. The resulting on-chip spectrometer provides high-performance EBI measuring capabilities and, together with a low-cost, commercially available radio-frequency transceiver device that provides reliable wireless communication, constitutes the basic node for building EBI wireless sensor networks (EBI-WSNs). The proposed EBI-WSN behaves as a high-performance wireless multi-channel EBI spectrometer, where the number of channels is completely scalable and independently configurable to satisfy the specific measurement requirements of each individual. A prototype of the EBI node fits on a very small printed circuit board of approximately 8 cm², including a chip antenna, and can operate for several years on one 3 V coin-cell battery, making it suitable for long-term preventive healthcare monitoring.

  11. Stroke damage detection using classification trees on electrical bioimpedance cerebral spectroscopy measurements.

    PubMed

    Atefi, Seyed Reza; Seoane, Fernando; Thorlin, Thorleif; Lindecrantz, Kaj

    2013-08-07

    After cancer and cardiovascular disease, stroke is the third greatest cause of death worldwide. Given the limitations of the current imaging technologies used for stroke diagnosis, there is a crucial need for portable, non-invasive and less expensive diagnostic tools. Previous studies have suggested that electrical bioimpedance (EBI) measurements from the head might contain useful clinical information related to changes produced in the cerebral tissue after the onset of stroke. In this study, we recorded 720 EBI Spectroscopy (EBIS) measurements from two different head regions of 18 hemispheres of nine subjects. Three of these subjects had suffered a unilateral haemorrhagic stroke. A number of features based on structural and intrinsic frequency-dependent properties of the cerebral tissue were extracted. These features were then fed into a classification tree. The results show that a full classification of damaged and undamaged cerebral tissue was achieved after three hierarchical classification steps. Lastly, the performance of the classification tree was assessed using Leave-One-Out Cross-Validation (LOO-CV). Although the results of this study are limited to a small database and the observations must be verified further with a larger cohort of patients, these findings confirm that EBI measurements contain useful information for assessing the health of brain tissue after stroke, and support the hypothesis that classification features based on Cole parameters, spectral information and the geometry of EBIS measurements are useful for differentiating between healthy and stroke-damaged brain tissue.

  12. Four probe architecture using high spatial resolution single multi-walled carbon nanotube electrodes for electrophysiology and bioimpedance monitoring of whole tissue

    NASA Astrophysics Data System (ADS)

    de Asis, Edward D.; Leung, Joseph; Wood, Sally; Nguyen, Cattien V.

    2010-03-01

    We report the application of a sensor with a multielectrode architecture consisting of four single multiwalled carbon nanotube electrodes (sMWNT electrodes) with nanotube tip diameters of approximately 30 nm to stimulation, recording, and bioimpedance characterization of whole muscle. Parallel pairs of sMWNT electrodes achieve improved stimulation efficiency from a reduction in electrode impedance and enhanced signal-to-noise ratio by detecting endogenic signals from a larger population of electrically active cells. The sensor with a four sMWNT electrode configuration can monitor changes in whole tissue bioimpedance.

  13. A Prospective Validation Study of Bioimpedance with Volume Displacement in Early-Stage Breast Cancer Patients at Risk for Lymphedema

    PubMed Central

    Barrio, Andrea V.; Eaton, Anne; Frazier, Thomas G.

    2015-01-01

    BACKGROUND Although volume displacement (VD) is considered the gold standard for diagnosing breast cancer (BC)-related lymphedema, it is inconvenient. We compared bioimpedance (L-Dex) and VD measurements in a prospective cohort of BC patients at risk for lymphedema. METHODS Between 2010 and 2014, 223 BC patients were enrolled. Following exclusions (n=37), 186 received baseline VD and L-Dex; follow-up measurements were performed at 3-6 month intervals for 3 years. At each visit, patients fit into one of three categories: normal (normal VD and L-Dex); abnormal L-Dex (L-Dex >10 or an increase of 10 from baseline, with normal VD); or lymphedema (relative arm volume difference of >10% by VD, with or without an abnormal L-Dex). Change in L-Dex was plotted against change in VD; correlation was assessed using the Pearson correlation. RESULTS At a median follow-up of 18.2 months, 152 patients were normal; 25 had an abnormal L-Dex; and 9 developed lymphedema without a prior L-Dex abnormality. Of the 25 abnormal L-Dex patients, 4 progressed to lymphedema, for a total of 13 patients with lymphedema. Evaluating all time points, 186 patients had 829 follow-up measurements. Sensitivity and specificity of L-Dex compared to VD were 75% and 93%, respectively. There was no correlation between change in VD and change in L-Dex at 3 months (R=0.31) or 6 months (R=0.21). CONCLUSIONS VD and bioimpedance demonstrated poor correlation, with inconsistent overlap of measurements considered abnormal. Of patients with an abnormal L-Dex, few progressed to lymphedema; most with lymphedema did not have a prior L-Dex abnormality. Further studies are needed to understand the clinical significance of bioimpedance. PMID:26085222
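    The three per-visit categories in the Methods reduce to a small decision rule. The thresholds below are taken from the abstract; the function name and inputs are ours.

```python
def classify_visit(ldex, baseline_ldex, rel_volume_diff_pct):
    """Study category for one follow-up visit.

    lymphedema:      relative arm volume difference >10% by VD
                     (with or without an abnormal L-Dex)
    abnormal L-Dex:  L-Dex >10, or an increase of 10 from baseline,
                     with normal VD
    normal:          normal VD and L-Dex
    """
    if rel_volume_diff_pct > 10:
        return "lymphedema"
    if ldex > 10 or (ldex - baseline_ldex) >= 10:
        return "abnormal L-Dex"
    return "normal"
```

    For example, a visit with L-Dex 12 and a 5% arm volume difference falls in the "abnormal L-Dex" category, whereas a 12% volume difference is classified as lymphedema regardless of L-Dex.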

  14. Implementation of new dry electrodes and comparison with conventional Ag/AgCl electrodes for whole body electrical bioimpedance application.

    PubMed

    Dassonville, Y; Barthod, C; Passard, M

    2015-01-01

    Reusable electrodes, when embedded into devices, can provide new ways of making physiological measurements and can improve the usability and comfort of monitoring systems using whole-body electrical bioimpedance in the areas of medicine, nutrition and sports. However, good electrical and mechanical contact between electrode and skin is very important, as it defines the quality of the signal, and generally requires the use of consumables. This paper introduces innovative dry electrodes and compares their electrical behavior with that of a traditional Ag/AgCl electrolytic electrode. Through measurement campaigns involving healthy Caucasian volunteers, three designs of experiments were conducted to select the optimized set of material, supply, and conditions of use.

  15. Prediction of body fat percentage from skinfold and bio-impedance measurements in Indian school children

    PubMed Central

    Kehoe, Sarah H.; Krishnaveni, Ghattu V.; Lubree, Himangi G.; Wills, Andrew K.; Guntupalli, Aravinda M.; Veena, Sargoor R.; Bhat, Dattatray S.; Kishore, Ravi; Fall, Caroline H.D.; Yajnik, Chittaranjan S.; Kurpad, Anura

    2011-01-01

    Background Few equations for calculating body fat percentage (BF%) from field methods have been developed in South Asian children. Objective To assess agreement between BF% derived from primary reference methods and that from skinfold equations and bio-impedance analysis (BIA) in Indian children. Methods We measured BF% in two groups of Indian children. In Pune, 570 rural children aged 6-8 years underwent dual-energy X-ray absorptiometry (DXA) scans. In Mysore, ¹⁸O was administered to 59 urban children aged 7-9 years. We conducted BIA at 50 kHz and anthropometry including subscapular and triceps skinfold thicknesses. We used the published equations of Wickramasinghe, Shaikh, Slaughter and Dezenberg to calculate BF% from anthropometric data, and the manufacturer's equation for BIA measurements. We assessed agreement with values derived from DXA and doubly labelled water (DLW) using Bland-Altman analysis. Results Children were light and thin compared to international standards. There was poor agreement between the reference BF% values and those from all equations. Assumptions for Bland-Altman analysis were not met for the Wickramasinghe, Shaikh and Slaughter equations. The Dezenberg equations under-predicted BF% for most children (mean difference in Pune −13.4, LOA −22.7, −4.0; in Mysore −7.9, LOA −13.7, −2.2). The mean bias for the BIA equation was +5.0% in Pune and +1.95% in Mysore, and the LOA were wide (−5.0, 15.0 and −7.8, 11.7, respectively). Conclusions Currently available skinfold equations do not accurately predict BF% in Indian children. We recommend development of BIA equations in this population using a 4-compartment model. PMID:21731039
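    The Bland-Altman agreement statistics quoted above are the mean bias and the 95% limits of agreement (bias ± 1.96 SD of the paired differences). A sketch with invented BF% pairs:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Mean bias and 95% limits of agreement between two paired methods."""
    diff = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Invented paired BF% values: field method (BIA) vs. reference (DXA).
bia_bf = [22.0, 18.5, 25.1, 30.2, 16.8, 21.4]
dxa_bf = [18.2, 14.9, 20.3, 24.8, 13.1, 17.0]
bias, loa_low, loa_high = bland_altman(bia_bf, dxa_bf)
```

    A positive bias with wide limits of agreement is exactly the pattern the study reports for the BIA equation.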

  16. Assessment of degree of hydration in dialysis patients using whole body and calf bioimpedance analysis

    NASA Astrophysics Data System (ADS)

    Zhu, F.; Kotanko, P.; Handelman, G. J.; Raimann, J.; Liu, L.; Carter, M.; Kuhlmann, M. K.; Siebert, E.; Leonard, E. F.; Levin, N. W.

    2010-04-01

    Prescription of an appropriate post-hemodialysis (HD) target weight requires accurate evaluation of the degree of hydration. The aim of this study was to investigate whether a state of normal hydration as defined by calf bioimpedance spectroscopy (cBIS) could be characterized in HD patients and normal subjects (NS). cBIS was performed in 62 NS (33 m/29 f) and 30 HD patients (16 m/14 f) pre- and post-dialysis to measure extracellular resistance. Normalized calf resistivity at 5 kHz (ρN,5) was defined as resistivity divided by body mass index. Measurements were made at baseline (BL) and at a state of normal hydration (NH) established following the progressive reduction of post-HD weight over successive dialysis treatments until ρN,5 was in the range of NS. Blood pressures were measured pre- and post-HD treatment. ρN,5 did not differ significantly between males and females in NS (20.5 ± 1.99 vs 21.7 ± 2.6 × 10⁻² Ω·m³/kg, p > 0.05). In patients, ρN,5 notably increased and reached the NH range due to the progressive decrease in body weight, and systolic blood pressure (SBP) pre- and post-HD decreased significantly between BL and NH. This establishes ρN,5 as a new comparator allowing the clinician to incrementally monitor the effect of removal of extracellular fluid from patients over a course of dialysis treatments.
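    The normalized calf resistivity defined above can be sketched directly: the calf is treated as a cylinder, so resistivity ρ = R·A/L, and dividing by BMI (kg/m²) yields Ω·m³/kg, matching the units reported. The example numbers below are invented; only the formula follows the text.

```python
import math

def normalized_calf_resistivity(r5_ohm, circumference_m, length_m,
                                weight_kg, height_m):
    """rho_N,5: calf resistivity at 5 kHz divided by body mass index."""
    area_m2 = circumference_m ** 2 / (4.0 * math.pi)  # cross-section from girth
    resistivity = r5_ohm * area_m2 / length_m          # ohm * m
    bmi = weight_kg / height_m ** 2                    # kg / m^2
    return resistivity / bmi                           # ohm * m^3 / kg

# Invented example: 30-ohm calf segment, 36 cm girth, 10 cm electrode
# spacing, 70 kg subject of height 1.75 m.
rho_n5 = normalized_calf_resistivity(30.0, 0.36, 0.10, 70.0, 1.75)
```
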

  17. Use of Bioimpedance to Assess Changes in Hemodynamics During Acute Administration of CPAP

    PubMed Central

    Digby, Genevieve C.; Driver, Helen S.; Fitzpatrick, Michael; Ropchan, Glorianne; Parker, Christopher M.

    2011-01-01

    Background Attempts to investigate the mechanisms by which continuous positive airway pressure (CPAP) therapy improves heart function in patients with obstructive sleep apnea (OSA) have been limited by the lack of non-invasive methods to assess cardiac performance. We used transthoracic electrical bioimpedance (TEB) to assess acute hemodynamic changes including heart rate (HR), stroke volume (SV), cardiac output (CO) and cardiac index (CI) during PAP titration in (1) post-operative cardiac surgery patients, (2) patients with severe OSA, and (3) normal healthy volunteers. Methods Post-operative cardiac surgery patients were studied via TEB and pulmonary artery catheter (PAC) during acute titration of positive end-expiratory pressure (PEEP) while mechanically ventilated. Patients with severe OSA were studied non-invasively by TEB during acute CPAP titration in supine stage 2 sleep, and normal subjects while awake and recumbent. Results In post-operative cardiac surgery patients (n = 3), increasing PEEP to 18 cmH2O significantly reduced SV and CI relative to baseline. There was no difference between TEB and PAC in terms of ability to assess variations in hemodynamic parameters. In patients with severe OSA (n = 3), CPAP titration to optimal pressure to alleviate obstructive apneas reduced HR, SV, CO and CI significantly compared to without CPAP. In three healthy subjects, maximal tolerated CPAP reduced SV and CO significantly compared to baseline. Conclusions Acute administration of CPAP causes a decrease in CO and CI, apparently a consequence of a reduction in SV. TEB appears to be an accurate and reproducible non-invasive method of detecting changes in hemodynamics.
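    The derived hemodynamic quantities tracked in this study reduce to two textbook relations: CO = HR × SV and CI = CO / BSA. A sketch with invented readings; the BSA formula (Mosteller) is our assumption, since the paper does not state one.

```python
import math

def cardiac_output(hr_bpm, sv_ml):
    """Cardiac output (L/min) = heart rate x stroke volume."""
    return hr_bpm * sv_ml / 1000.0

def cardiac_index(co_l_min, height_cm, weight_kg):
    """Cardiac index (L/min/m^2), using the Mosteller BSA formula (assumed)."""
    bsa_m2 = math.sqrt(height_cm * weight_kg / 3600.0)
    return co_l_min / bsa_m2

co = cardiac_output(72, 70)      # invented readings: 72 bpm, 70 ml stroke volume
ci = cardiac_index(co, 175, 70)  # invented: 175 cm, 70 kg subject
```

    A CPAP-induced fall in SV therefore propagates directly into CO and CI, which is the pattern the study reports.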

  18. Theoretical estimates of equilibrium sulfur isotope effects in aqueous sulfur systems: Highlighting the role of isomers in the sulfite and sulfoxylate systems

    NASA Astrophysics Data System (ADS)

    Eldridge, D. L.; Guo, W.; Farquhar, J.

    2016-12-01

    We present theoretical calculations for all three isotope ratios of sulfur (33S/32S, 34S/32S, 36S/32S) at the B3LYP/6-31+G(d,p) level of theory for aqueous sulfur compounds modeled in 30-40 H2O clusters spanning the range of sulfur oxidation state (Sn, n = -2 to +6) for estimating equilibrium fractionation factors in aqueous systems. Computed 34β values based on major isotope (34S/32S) reduced partition function ratios (RPFRs) scale to first order with sulfur oxidation state and coordination, where 34β generally increases with higher oxidation state and increasing coordination of the sulfur atom. Exponents defining mass-dependent relationships based on β values (x/34κ = ln(xβ)/ln(34β), x = 33 or 36) conform to tight ranges over a wide range of temperature for all aqueous compounds (33/34κ ≈ 0.5148-0.5159, 36/34κ ≈ 1.89-1.90 for T ⩾ 0 °C). The exponents converge near a singular value for all compounds at the high-temperature limit (33/34κT→∞ = 0.51587 ± 0.00003 and 36/34κT→∞ = 1.8905 ± 0.0002; 1 s.d. of all computed compounds), and typically follow trends based on oxidation state and coordination similar to those seen in 34β values at lower temperatures. Theoretical equilibrium fractionation factors computed from these β values are compared to experimental constraints for HSO3-T(aq)/SO2(g, aq), SO2(aq)/SO2(g), H2S(aq)/H2S(g), H2S(aq)/HS-(aq), SO42-(aq)/H2ST(aq), S2O32-(aq) (intramolecular), and S2O32-(aq)/H2ST(aq), and generally agree within a reasonable estimation of uncertainties. We make predictions of fractionation factors where other constraints are unavailable. Isotope partitioning of the isomers of protonated compounds in the sulfite and sulfoxylate systems depends strongly on whether protons are bound to sulfur or oxygen atoms. The magnitude of the HSO3-T/SO32- major isotope (34S/32S) fractionation factor is predicted to increase with temperature from 0 to 70 °C due to the combined effects of the large magnitude (HS)O3
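    The mass-dependent exponents above are defined directly as a ratio of logarithms of β values, x/34κ = ln(xβ)/ln(34β). A small sketch with hypothetical β values (the computed exponent falls in the canonical ~0.515 range the abstract reports):

```python
import math

def kappa(x_beta, beta_34):
    """Mass-dependent exponent x/34-kappa = ln(x_beta) / ln(34-beta)."""
    return math.log(x_beta) / math.log(beta_34)

# Hypothetical RPFR-based beta values for one compound at one temperature;
# the paper's actual values depend on compound and T.
beta_34 = 1.0750   # 34S/32S
beta_33 = 1.0379   # 33S/32S
k_33_34 = kappa(beta_33, beta_34)
```
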

  19. Theoretical estimation of quench occurrence and propagation based on generalized thermoelasticity for LTS/HTS tapes triggered by a spot heater

    NASA Astrophysics Data System (ADS)

    Tong, Yujin; Guan, Mingzhi; Wang, Xingzhe

    2017-04-01

    The present study deals with the thermal characteristics and mechanical behaviors of low/high temperature superconducting (LTS/HTS) composite tapes during quench processes triggered by a spot heater. Based on generalized thermoelastic theory, a dynamic thermoelastic model with a relaxation time is developed which takes into account the temperature dependence and finite speed of heat propagation in superconducting tapes under cryogenic conditions. The analyses were performed using the finite element method to solve the coupled differential equations of dynamic heat conduction and elastic equilibrium. The results show that the thermoelastic behaviors are strongly related to the quench characteristics of the superconductors. As a quench occurs, the thermoelastic strain-rate exhibits an obvious jump, with its peak coinciding with the time at which the critical temperature is reached. Such a jump in strain-rate could serve as a means of estimating and detecting quench occurrence, and the theoretical predictions coincide with existing experimental observations of thermoelastic strain-rate in LTS magnets. For an HTS tape, the thermoelastic strain-rate or temperature-rate variation and a small jump are also illustrated as the quench occurrence is determined. Additionally, the normal zone propagation velocities for the LTS/HTS tapes are predicted from the critical temperature and thermoelastic strain-rate, showing quite good agreement with the results evaluated by Wilson's formula for an LTS tape and with the experimental measurements for an HTS tape. The influences of the relaxation time of heat conduction and of thermoelastic coupling on the thermal distribution and strain profile are also discussed in detail.

  20. Wavelet-based multiscale analysis of bioimpedance data measured by electric cell-substrate impedance sensing for classification of cancerous and normal cells.

    PubMed

    Das, Debanjan; Shiladitya, Kumar; Biswas, Karabi; Dutta, Pranab Kumar; Parekh, Aditya; Mandal, Mahitosh; Das, Soumen

    2015-12-01

    The paper presents a study to differentiate normal and cancerous cells using label-free bioimpedance signal measured by electric cell-substrate impedance sensing. The real-time-measured bioimpedance data of human breast cancer cells and human epithelial normal cells employs fluctuations of impedance value due to cellular micromotions resulting from dynamic structural rearrangement of membrane protrusions under nonagitated condition. Here, a wavelet-based multiscale quantitative analysis technique has been applied to analyze the fluctuations in bioimpedance. The study demonstrates a method to classify cancerous and normal cells from the signature of their impedance fluctuations. The fluctuations associated with cellular micromotion are quantified in terms of cellular energy, cellular power dissipation, and cellular moments. The cellular energy and power dissipation are found higher for cancerous cells associated with higher micromotions in cancer cells. The initial study suggests that proposed wavelet-based quantitative technique promises to be an effective method to analyze real-time bioimpedance signal for distinguishing cancer and normal cells.
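    The core idea of such a multiscale analysis can be sketched generically: decompose the impedance-fluctuation series with a wavelet (a Haar wavelet here, for simplicity) and compare the energy of the detail coefficients across scales. The signals below are synthetic, and the paper's wavelet choice and exact metrics differ in detail.

```python
import numpy as np

def haar_detail_energies(signal, levels=4):
    """Energy of Haar-wavelet detail coefficients at each decomposition level."""
    a = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        a = a[: len(a) // 2 * 2]               # ensure even length
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation for next level
        energies.append(float(np.sum(d ** 2)))
    return energies

# Synthetic stand-ins: larger micromotion fluctuations for the
# "cancer-like" trace than the "normal-like" trace.
rng = np.random.default_rng(1)
quiet = rng.standard_normal(1024) * 0.1
active = rng.standard_normal(1024) * 0.4
e_quiet = sum(haar_detail_energies(quiet))
e_active = sum(haar_detail_energies(active))
```

    The higher total detail energy of the "active" trace mirrors the paper's finding that cellular energy is higher for cancerous cells.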

  2. Model-based correction of the influence of body position on continuous segmental and hand-to-foot bioimpedance measurements.

    PubMed

    Medrano, Guillermo; Eitner, Frank; Walter, Marian; Leonhardt, Steffen

    2010-06-01

    Bioimpedance spectroscopy (BIS) is suitable for continuous monitoring of body water content. The combination of body posture and time is a well-known source of error, which limits the accuracy and therapeutic validity of BIS measurements. This study evaluates a model-based correction as a possible solution. For this purpose, an 11-cylinder model representing body impedance distribution is used. Each cylinder contains a nonlinear two-pool model to describe fluid redistribution due to changing body position and its influence on segmental and hand-to-foot (HF) bioimpedance measurements. A model-based correction of segmental (thigh) and HF measurements (Xitron Hydra 4200) in nine healthy human subjects (following a sequence of 7 min supine, 20 min standing, 40 min supine) has been evaluated. The model-based compensation algorithm represents a compromise between accuracy and simplicity, and reduces the influence of changes in body position on the measured extracellular resistance and extracellular fluid by up to 75 and 70%, respectively.
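    Each model cylinder contains a two-pool description of fluid redistribution; a minimal linearised sketch of that idea is first-order exchange between two fluid pools after a posture change. The rate constants and volumes below are invented, and the paper's actual model is nonlinear and more detailed.

```python
def simulate_two_pool(v1_0, v2_0, k12, k21, dt=0.01, t_end=30.0):
    """Euler integration of dv1/dt = -(k12*v1 - k21*v2), dv2/dt = +(...)."""
    v1, v2 = v1_0, v2_0
    for _ in range(int(t_end / dt)):
        flow = k12 * v1 - k21 * v2   # net flow from pool 1 to pool 2
        v1 -= flow * dt
        v2 += flow * dt
    return v1, v2

# A posture change shifts fluid between pools toward a new equilibrium
# (reached when k12*v1 == k21*v2); total volume is conserved.
v1, v2 = simulate_two_pool(v1_0=2.0, v2_0=1.0, k12=0.2, k21=0.1)
```

    With these rates the equilibrium is v1 = 1.0, v2 = 2.0; it is this slow redistribution transient that the model-based correction subtracts from the measured bioimpedance.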

  3. Effects of elevated vacuum on in-socket residual limb fluid volume: Case study results using bioimpedance analysis

    PubMed Central

    Sanders, JE; Harrison, DS; Myers, TR; Allyn, KJ

    2015-01-01

    Bioimpedance analysis was used to measure residual limb fluid volume in seven trans-tibial amputee subjects using elevated vacuum sockets and non-elevated vacuum (suction) sockets. Fluid volume changes were assessed during sessions with the subjects sitting, standing, and walking. In general, fluid volume losses during 3 or 5 min walks and losses over the course of the 30-min test session were less for elevated vacuum than for suction. A number of variables, including the time of day at which data were collected, soft tissue consistency, socket-to-limb size and shape differences, and subject health, may have affected the results and had an equivalent or greater impact on limb fluid volume than elevated vacuum. Researchers should carefully consider these variables in the study design of future investigations of the effects of elevated vacuum on residual limb volume. PMID:22234667

  4. PREFACE: XV International Conference on Electrical Bio-Impedance (ICEBI) & XIV Conference on Electrical Impedance Tomography (EIT)

    NASA Astrophysics Data System (ADS)

    Pliquett, Uwe

    2013-04-01

    Over recent years advanced measurement methods have facilitated outstanding achievements not only in medical instrumentation but also in biotechnology. Impedance measurement is a simple and innocuous way to characterize materials. For more than 40 years biological materials, most of them based on cells, have been characterized by means of electrical impedance for quality control of agricultural products, monitoring of biotechnological or food processes or in health care. Although the list of possible applications is long, very few applications successfully entered the market before the turn of the century. This was, on the one hand, due to the low specificity of electrical impedance with respect to other material properties because it is influenced by multiple factors. On the other hand, equipment and methods for many potential applications were not available. With the appearance of microcontrollers that could be easily integrated in applications at the beginning of the 1980s, impedance measurement advanced as a valuable tool in process optimization and lab automation. However, established methods and data processing were mostly used in a new environment. This has changed significantly during the last 10 years with a dramatic growth of the market for medical instrumentation and also for biotechnological applications. Today, advanced process monitoring and control require fast and highly parallel electrical characterization which in turn yields incredible data volumes that must be handled in real time. Many newer developments require miniaturized but precise sensing methods which is one of the main parts of Lab-on-Chip technology. Moreover, biosensors increasingly use impedometric transducers, which are not compatible with the large expensive measurement devices that are common in the laboratory environment. Following the achievements in the field of bioimpedance measurement, we will now witness a dramatic development of new electrode structures and electronics

  5. Novel monitoring method for the management of heart failure: combined measurement of body weight and bioimpedance index of body fat percentage.

    PubMed

    Kataoka, Hajime

    2009-11-01

    Although body weight scales are most commonly used to evaluate body fluid status during follow-up of definite heart failure (HF) patients, bioimpedance measurement methods have become increasingly available in the clinical setting. These monitoring methods, however, are typically used separately to evaluate body fluid status in HF patients. Kataoka developed a novel method for monitoring HF patients using a digital weight scale that incorporated a bioelectrical impedance analyzer. This method combines the well-known advantages of body weighing with a refined bioimpedance technique to monitor HF status and provides valid information regarding a change in a patient's body fluid status during follow-up for HF, such as predominant fluid versus fat weight gain or loss. This special report describes examples of the practical use of this method for monitoring and treating definite HF patients.

  6. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations

    PubMed Central

    Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar

    2015-01-01

    For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopy (BIS) approaches are more accurate than single-frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy at the method level. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single-frequency Sun prediction equations at the population level was close to that of both BIS methods; however, comparing the Mean Absolute Percentage Error between the single-frequency prediction equations and the BIS methods yielded a significant difference, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both the population and individual level, the magnitude of the improvement was small. Such a slight improvement in the accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status in dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
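    The comparison metric named in the abstract, Mean Absolute Percentage Error, is a one-liner; the TBW values below are invented for illustration.

```python
import numpy as np

def mape(predicted, reference):
    """Mean Absolute Percentage Error of predictions against a reference."""
    p = np.asarray(predicted, dtype=float)
    r = np.asarray(reference, dtype=float)
    return float(np.mean(np.abs((p - r) / r)) * 100.0)

# Invented TBW values (litres): dilution reference vs. BIS prediction.
tbw_dilution = np.array([38.1, 42.5, 35.0, 47.2])
tbw_bis = np.array([39.0, 41.2, 36.1, 45.8])
err_pct = mape(tbw_bis, tbw_dilution)
```

    An expected error below the 4-5% target mentioned in the abstract would show up here as `err_pct < 5`.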

  7. Detection of a Moving Gas Source and Estimation of its Concentration Field with a Sensing Aerial Vehicle Integration of Theoretical Controls and Computational Fluids

    DTIC Science & Technology

    2016-07-21

    address the convective and diffusive scales effectively. The grid will be constructed based on length-scales obtained through the state estimator and...chemically inert species, i.e. the net amount of material convected out of the volume element must be balanced only by an equivalent amount of material...Nonlinear Conservation Laws and Convection -Diffusion Equations,” Journal of Computational Physics, 2000, Vol. 160, No. 1, Elsevier, pp. 241-282. [69

  8. An information-theoretic approach to estimating the composite genetic effects contributing to variation among generation means: Moving beyond the joint-scaling test for line cross analysis.

    PubMed

    Blackmon, Heath; Demuth, Jeffery P

    2016-02-01

    The pace and direction of evolution in response to selection, drift, and mutation are governed by the genetic architecture that underlies trait variation. Consequently, much of evolutionary theory is predicated on assumptions about whether genes can be considered to act in isolation, or in the context of their genetic background. Evolutionary biologists have disagreed, sometimes heatedly, over which assumptions best describe evolution in nature. Methods for estimating genetic architectures that favor simpler (i.e., additive) models contribute to this debate. Here we address one important source of bias, model selection in line cross analysis (LCA). LCA estimates genetic parameters conditional on the best model chosen from a vast model space using relatively few line means. Current LCA approaches often favor simple models and ignore uncertainty in model choice. To address these issues we introduce Software for Analysis of Genetic Architecture (SAGA), which comprehensively assesses the potential model space, quantifies model selection uncertainty, and uses model weighted averaging to accurately estimate composite genetic effects. Using simulated data and previously published LCA studies, we demonstrate the utility of SAGA to more accurately define the components of complex genetic architectures, and show that traditional approaches have underestimated the importance of epistasis.
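    The model-averaging idea behind SAGA can be sketched as converting per-model information-criterion scores into Akaike weights and averaging a parameter estimate across models instead of picking a single "best" one. The scores and estimates below are invented, and SAGA's actual machinery (AICc over the full LCA model space) is richer than this.

```python
import numpy as np

def akaike_weights(scores):
    """Akaike weights from information-criterion scores (lower = better)."""
    delta = np.asarray(scores, dtype=float) - np.min(scores)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

aicc_scores = np.array([102.1, 103.4, 107.9])    # three candidate models
additive_effect = np.array([0.52, 0.47, 0.61])   # each model's estimate
w = akaike_weights(aicc_scores)
averaged_effect = float(w @ additive_effect)      # model-weighted average
```

    Because no single model gets all the weight, the averaged estimate reflects model-selection uncertainty rather than committing to the top-ranked model.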

  9. Continuous monitoring of plasma, interstitial, and intracellular fluid volumes in dialyzed patients by bioimpedance and hematocrit measurements.

    PubMed

    Jaffrin, Michel Y; Fenech, Marianne; de Fremont, Jean-François; Tolani, Michel

    2002-01-01

    Bioimpedance spectroscopy (BIS) permits evaluation of extra- and intracellular fluid volumes in patients. We wished to examine whether this technique, used in combination with hematocrit measurement, can reliably monitor fluid transfers during dialysis. Ankle to wrist BIS measurements were collected during 21 dialysis runs while hematocrit was continuously monitored in the blood line by an optical device. Extracellular (ECW) and intracellular (ICW) water volumes were calculated using Hanai's electrical model of suspensions. Plasma volume variations were calculated from hematocrit, and changes in interstitial volume were calculated as the difference between ECW and plasma volume changes. Because accuracy of ICW was too low, changes in ICW were calculated as the difference between ultrafiltered volume and ECW changes. Total body water (TBW) volumes calculated pre- and postdialysis were, respectively, 3.25+/-3.2 and 1.95+/-2.5 liters lower on average than TBW given by Watson et al.'s correlation. Average decreases in fluid compartments expressed as percentage of ultrafiltered volume were as follows: plasma, 18%; interstitial, 28%, and ICW, 54%. When the ultrafiltered volume was increased in a patient in successive runs, the relative contributions of ICW and interstitial fluid were augmented so as to reduce the relative drop in plasma volume.
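    Plasma-volume change is conventionally derived from a continuous hematocrit reading by assuming the circulating red-cell volume stays constant over the session; the abstract does not give its exact formula, so the standard relation below is our assumption.

```python
def plasma_volume_change_pct(hct_start, hct_now):
    """Percent change in plasma volume implied by a hematocrit change,
    assuming constant red-cell volume (standard hemoconcentration relation)."""
    ratio = (hct_start * (1.0 - hct_now)) / (hct_now * (1.0 - hct_start))
    return 100.0 * (ratio - 1.0)

# Invented example: hematocrit rising from 0.33 to 0.36 during
# ultrafiltration implies a plasma-volume drop of roughly 12%.
dpv_pct = plasma_volume_change_pct(0.33, 0.36)
```

    Subtracting this plasma-volume change from the BIS-derived ECW change then yields the interstitial contribution, as in the study.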

  10. Simulating Non-Specific Influences of Body Posture and Temperature on Thigh-Bioimpedance Spectroscopy during Continuous Monitoring Applications

    NASA Astrophysics Data System (ADS)

    Ismail, A. H.; Leonhardt, S.

    2013-04-01

    Application of bioimpedance spectroscopy (BIS) for continuous monitoring of body fluid volumes is gaining considerable importance in personal health care. Unless laboratory conditions are applied, both whole-body and segmental BIS configurations are subject to nonspecific influences (e.g. temperature and changes in body position) that reduce the method's accuracy and reproducibility. In this work, a two-compartment mathematical model describing the thigh segment has been adapted to simulate fluid and solute kinetics during a change in body position or a variation in skin temperature. The model is an improved version of our previous one, offering a good tradeoff between accuracy and simplicity. It represents the kinetics of fluid redistribution and of sodium, potassium, and protein concentrations, using simple equations to predict the time course of BIS variations. Validity of the model was verified in five subjects (following a sequence of 7 min supine, 20 min standing, and 40 min supine). The output of the model may reduce possible influences on BIS by up to 80%.
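    As an illustrative sketch only (the authors' model additionally tracks sodium, potassium, and protein concentrations, which are omitted here), the posture-driven fluid redistribution in a segment like the thigh can be approximated as a first-order shift toward a new equilibrium volume; the rate constant and equilibrium shift below are assumed values, not the paper's parameters:

```python
import numpy as np

def simulate_fluid_shift(minutes, k=0.05, v_shift=0.3):
    """Toy first-order model of posture-driven fluid redistribution:
    standing moves fluid into the segment's interstitium at rate k (1/min),
    relaxing toward an equilibrium shifted volume v_shift (liters)."""
    dt = 0.1                                  # Euler step (min)
    t = np.arange(0.0, minutes, dt)
    v = np.zeros_like(t)                      # accumulated shifted volume (L)
    for i in range(1, t.size):
        v[i] = v[i - 1] + dt * k * (v_shift - v[i - 1])
    return t, v
```

Such a kinetic term is the kind of ingredient a BIS correction scheme can use: once the expected posture transient is predicted, it can be subtracted from the measured impedance trend.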

  11. Theoretical Issues

    SciTech Connect

    Marc Vanderhaeghen

    2007-04-01

    The theoretical issues in the interpretation of the precision measurements of the nucleon-to-Delta transition by means of electromagnetic probes are highlighted. The results of these measurements are confronted with the state-of-the-art calculations based on chiral effective-field theories (EFT), lattice QCD, large-Nc relations, perturbative QCD, and QCD-inspired models. The link of the nucleon-to-Delta form factors to generalized parton distributions (GPDs) is also discussed.

  12. Theoretical estimation of mesogenic characteristics of 4-methyl (2′-hydroxy,4′-n-hexadecyloxy) azobenzene - a nematic liquid crystal

    NASA Astrophysics Data System (ADS)

    Gaurav, Pankaj Kumar; Roychoudhury, Mihir

    2014-03-01

    The compound 4-methyl (2′-hydroxy,4′-n-hexadecyloxy) azobenzene was synthesized by Prajapati and co-workers (Mol. Cryst. Liq. Cryst. 369 (2001), pp. 37-46). Subsequent experiments (D. Pal, [PhD thesis], University of Lucknow, Lucknow, India, 2007) confirmed that the compound exists in the nematic phase over a small temperature range (72°C-80°C). In the present work, optimization of the molecular geometry has been carried out using the Gaussian 03 suite of programs without any constraint, employing the density functional B3LYP along with the 6-31G** basis set, and checked for imaginary frequencies. A detailed investigation of the intermolecular interaction energy at various interacting configurations has been carried out. In order to study the mesogenic characteristics of the molecule, an attempt has been made to estimate the variation of the order parameter with respect to temperature as well as degrees of freedom. These studies will be helpful in understanding the mesogenic character of a molecule prior to synthesis and promise future applications in molecular engineering.

  13. Theoretical geology

    NASA Astrophysics Data System (ADS)

    Mikeš, Daniel

    2010-05-01

    Present-day geology is mostly empirical in nature. I claim that geology is by nature complex and that the empirical approach is bound to fail. Let us consider the input to be the set of ambient conditions and the output to be the sedimentary rock record. I claim that the output can only be deduced from the input if the relation from input to output is known. The fundamental question is therefore the following: can one predict the output from the input, that is, can one predict the behaviour of a sedimentary system? If one can, then the empirical/deductive method has a chance; if one cannot, then that method is bound to fail. The fundamental problem to solve is therefore the following: how to predict the behaviour of a sedimentary system? It is interesting to observe that this question is never asked, and many a study is conducted by the empirical/deductive method; it seems that the empirical method has been accepted as appropriate without question. It is, however, easy to argue that a sedimentary system is by nature complex, that several input parameters vary at the same time, and that they can create similar output in the rock record. It follows trivially from these first principles that in such a case the deductive solution cannot be unique. At the same time, several geological methods depart precisely from the assumption that one particular variable is the dictator/driver and that the others are constant, even though the data do not support such an assumption. The method of "sequence stratigraphy" is a typical example of such a dogma. It can easily be argued that any interpretation resulting from a method built on uncertain or wrong assumptions is erroneous. Still, this method has survived for many years, notwithstanding all the criticism it has received. This is just one example from the present-day geological world, and it is not unique. Even the alternative methods criticising sequence stratigraphy actually depart from the same

  14. Theoretical Mathematics

    NASA Astrophysics Data System (ADS)

    Stöltzner, Michael

    Answering to the double-faced influence of string theory on mathematical practice and rigour, the mathematical physicists Arthur Jaffe and Frank Quinn have contemplated the idea that there exists a `theoretical' mathematics (alongside `theoretical' physics) whose basic structures and results still require independent corroboration by mathematical proof. In this paper, I shall take the Jaffe-Quinn debate mainly as a problem of mathematical ontology and analyse it against the backdrop of two philosophical views that are appreciative towards informal mathematical development and conjectural results: Lakatos's methodology of proofs and refutations and John von Neumann's opportunistic reading of Hilbert's axiomatic method. The comparison of both approaches shows that mitigating Lakatos's falsificationism makes his insights about mathematical quasi-ontology more relevant to 20th century mathematics in which new structures are introduced by axiomatisation and not necessarily motivated by informal ancestors. The final section discusses the consequences of string theorists' claim to finality for the theory's mathematical make-up. I argue that ontological reductionism as advocated by particle physicists and the quest for mathematically deeper axioms do not necessarily lead to identical results.

  15. Early Indication of Decompensated Heart Failure in Patients on Home-Telemonitoring: A Comparison of Prediction Algorithms Based on Daily Weight and Noninvasive Transthoracic Bio-impedance

    PubMed Central

    Bonomi, Alberto G; Goode, Kevin M; Reiter, Harald; Habetha, Joerg; Amft, Oliver; Cleland, John GF

    2016-01-01

    Background Heart Failure (HF) is a common reason for hospitalization. Admissions might be prevented by early detection of and intervention for decompensation. Conventionally, changes in weight, a possible measure of fluid accumulation, have been used to detect deterioration. Transthoracic impedance may be a more sensitive and accurate measure of fluid accumulation. Objective In this study, we review previously proposed predictive algorithms using body weight and noninvasive transthoracic bio-impedance (NITTI) to predict HF decompensations. Methods We monitored 91 patients with chronic HF for an average of 10 months using a weight scale and a wearable bio-impedance vest. Three algorithms were tested using either simple rule-of-thumb differences (RoT), moving averages (MACD), or cumulative sums (CUSUM). Results Algorithms using NITTI in the 2 weeks preceding decompensation predicted events (P<.001); however, using weight alone did not. Cross-validation showed that NITTI improved sensitivity of all algorithms tested and that trend algorithms provided the best performance for either measurement (Weight-MACD: 33%, NITTI-CUSUM: 60%) in contrast to the simpler rules-of-thumb (Weight-RoT: 20%, NITTI-RoT: 33%) as proposed in HF guidelines. Conclusions NITTI measurements decrease before decompensations, and combined with trend algorithms, improve the detection of HF decompensation over current guideline rules; however, many alerts are not associated with clinically overt decompensation. PMID:26892844
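    A minimal one-sided CUSUM detector of the kind this study evaluates might look as follows. The reference value `k` and decision threshold `h`, and the daily-weight framing, are illustrative assumptions rather than the authors' tuned parameters; for transthoracic impedance, which falls before decompensation, the deviation sign would be inverted:

```python
def cusum_alerts(x, target, k=0.5, h=4.0):
    """One-sided CUSUM over daily measurements x: accumulate deviations
    above target + k and raise an alert when the sum exceeds h."""
    s, alerts = 0.0, []
    for i, xi in enumerate(x):
        s = max(0.0, s + (xi - target - k))   # small fluctuations decay to zero
        if s > h:
            alerts.append(i)
            s = 0.0                           # reset after alerting
    return alerts
```

Because the statistic accumulates evidence over consecutive days, a sustained small drift triggers an alert while isolated day-to-day noise does not, which is why trend algorithms outperformed the simple rule-of-thumb differences in the study.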

  16. Towards the development of a wearable Electrical Impedance Tomography system: A study about the suitability of a low power bioimpedance front-end.

    PubMed

    Menolotto, Matteo; Rossi, Stefano; Dario, Paolo; Della Torre, Luigi

    2015-01-01

    Wearable systems for remote monitoring of physiological parameters are ready to evolve towards wearable imaging systems. Electrical Impedance Tomography (EIT) allows non-invasive investigation of the internal body structure. The characteristics of this low-resolution, low-cost technique match well with the concept of a wearable imaging device. On the other hand, low power consumption, a mandatory requirement for wearable systems, is not usually discussed for standard EIT applications. In this work, a previously developed low-power architecture for a wearable bioimpedance sensor is applied to EIT acquisition and reconstruction, to evaluate the impact on the image of the limited signal-to-noise ratio (SNR) caused by the low-power design. Anatomical models of the chest, with increasing geometric complexity, were developed in order to evaluate and calibrate, through simulations, the parameters of the reconstruction algorithms provided by the Electrical Impedance Diffuse Optical Reconstruction Software (EIDORS) project. The simulation results were compared with experimental measurements taken with our bioimpedance device on a phantom reproducing chest tissue properties. The comparison was both qualitative and quantitative, through the application of suitable figures of merit; in this way the impact of the low-power front-end's noise on image quality was assessed. The comparison between simulation and measurement results demonstrated that, despite the limited SNR, the device is accurate enough to be used for the development of an EIT-based wearable imaging system.

  17. Number Theoretic Methods in Parameter Estimation

    DTIC Science & Technology

    2007-11-02

    Final report covering 1 April 1997 - 31 May 1998 (report date 5/29/98). American University, Department of Mathematics and Statistics, 4400 Massachusetts Ave., N.W., Washington, D.C. 20016-8050. Sponsoring agency: Office of Naval Research, 800 North Quincy Street, Arlington, VA 22217-5660 (ONR grant). [Report documentation page only; no abstract recovered.]

  18. Impact of demographic, genetic, and bioimpedance factors on gestational weight gain and birth weight in a Romanian population

    PubMed Central

    Mărginean, Claudiu; Mărginean, Cristina Oana; Bănescu, Claudia; Meliţ, Lorena; Tripon, Florin; Iancu, Mihaela

    2016-01-01

    Abstract The present study had 2 objectives, first, to investigate possible relationships between increased gestational weight gain and demographic, clinical, paraclinical, genetic, and bioimpedance (BIA) characteristics of Romanian mothers, and second, to identify the influence of predictors (maternal and newborns characteristics) on our outcome birth weight (BW). We performed a cross-sectional study on 309 mothers and 309 newborns from Romania, divided into 2 groups: Group I—141 mothers with high gestational weight gain (GWG) and Group II—168 mothers with normal GWG, that is, control group. The groups were evaluated regarding demographic, anthropometric (body mass index [BMI], middle upper arm circumference, tricipital skinfold thickness, weight, height [H]), clinical, paraclinical, genetic (interleukin 6 [IL-6]: IL-6 -174G>C and IL-6 -572C>G gene polymorphisms), and BIA parameters. We noticed that fat mass (FM), muscle mass (MM), bone mass (BM), total body water (TBW), basal metabolism rate (BMR) and metabolic age (P < 0.001), anthropometric parameters (middle upper arm circumference, tricipital skinfold thickness; P < 0.001/P = 0.001) and hypertension (odds ratio = 4.65, 95% confidence interval: 1.27–17.03) were higher in mothers with high GWG. BW was positively correlated with mothers’ FM (P < 0.001), TBW (P = 0.001), BMR (P = 0.02), while smoking was negatively correlated with BW (P = 0.04). Variant genotype (GG+GC) of the IL-6 -572C>G polymorphism was higher in the control group (P = 0.042). We observed that high GWG may be an important predictor factor for the afterward BW, being positively correlated with FM, TBW, BMR, metabolic age of the mothers, and negatively with the mother's smoking status. Variant genotype (GG+GC) of the IL-6 -572C>G gene polymorphism is a protector factor against obesity in mothers. All the variables considered explained 14.50% of the outcome variance. PMID:27399105

  19. Intracranial hemorrhage alters scalp potential distribution in bioimpedance cerebral monitoring: Preliminary results from FEM simulation on a realistic head model and human subjects

    PubMed Central

    Atefi, Seyed Reza; Seoane, Fernando; Kamalian, Shervin; Rosenthal, Eric S.; Lev, Michael H.; Bonmassar, Giorgio

    2016-01-01

    Purpose: Current diagnostic neuroimaging for detection of intracranial hemorrhage (ICH) is limited to fixed scanners requiring patient transport and extensive infrastructure support. ICH diagnosis would therefore benefit from a portable diagnostic technology, such as electrical bioimpedance (EBI). Through simulations and patient observation, the authors assessed the influence of unilateral ICH hematomas on quasisymmetric scalp potential distributions in order to establish the feasibility of EBI technology as a potential tool for early diagnosis. Methods: Finite element method (FEM) simulations and experimental left–right hemispheric scalp potential differences of healthy and damaged brains were compared with respect to the asymmetry caused by ICH lesions on quasisymmetric scalp potential distributions. In numerical simulations, this asymmetry was measured at 25 kHz and visualized on the scalp as the normalized potential difference between the healthy and ICH damaged models. Proof-of-concept simulations were extended in a pilot study of experimental scalp potential measurements recorded between 0 and 50 kHz with the authors’ custom-made bioimpedance spectrometer. Mean left–right scalp potential differences recorded from the frontal, central, and parietal brain regions of ten healthy control and six patients suffering from acute/subacute ICH were compared. The observed differences were measured at the 5% level of significance using the two-sample Welch t-test. Results: The 3D-anatomically accurate FEM simulations showed that the normalized scalp potential difference between the damaged and healthy brain models is zero everywhere on the head surface, except in the vicinity of the lesion, where it can vary up to 5%. The authors’ preliminary experimental results also confirmed that the left–right scalp potential difference in patients with ICH (e.g., 64 mV) is significantly larger than in healthy subjects (e.g., 20.8 mV; P < 0.05). Conclusions: Realistic

  20. Nutritional status evaluated by multi-frequency bioimpedance is not associated with quality of life or depressive symptoms in hemodialysis patients.

    PubMed

    Barros, Annerose; da Costa, Bartira E Pinheiro; Poli-de-Figueiredo, Carlos E; Antonello, Ivan C; d'Avila, Domingos O

    2011-02-01

    Hemodialysis therapy significantly impacts on patients' physical, psychological, and social performances. Such reduced quality of life depends on several factors, such as malnutrition, depression, and metabolic derangements. This study aims to evaluate the current nutritional status, quality of life and depressive symptoms, and determine the possible relationships with other risk factors for poor outcomes, in stable hemodialysis patients. This was a single-center, cross-sectional study that enrolled 59 adult patients undergoing hemodialysis. Laboratory tests that included high-sensitivity c-reactive protein (CRP), and quality of life and depressive symptom evaluation, as well as malnutrition-inflammation score, nutritional status and body composition (by direct segmental multi-frequency bioimpedance analysis) determinations were performed. Patients were classified as "underfat", "standard", "overfat", or "obese" by multi-frequency bioimpedance analysis. Seven patients were underfat, 19 standard, 19 overfat, and 14 obese. Triglyceride levels significantly differed between the underfat, standard, overfat, and obese groups (1.06 [0.98-1.98]; 1.47 [1.16-1.67]; 2.53 [1.17-3.13]; 2.12 [1.41-2.95] mmol/L, respectively; P=0.026), as did Kt/V between the underfat, overfat, and obese groups (1.49 ± 0.14; 1.23 ± 0.19; 1.19 ± 0.22; P=0.015 and P=0.006, respectively). Depressive symptoms, quality of life, and CRP and phosphate levels did not diverge among nutritional groups. Creatinine, albumin, and phosphate strongly correlated, as well as percent body fat, body mass index, and waist circumference (r=0.859 [P<0.001], and r=0.716 [P<0.001], respectively). Depressive symptoms and physical and psychological quality-of-life domains also strongly correlated (r(s) = -0.501 [P<0.001], r(s) = -0.597 [P<0.001], respectively). The majority of patients were overfat or obese and very few underfat. Inflammation was prevalent, overall. No association of nutritional status with

  1. Disagreement between standard transthoracic impedance cardiography and the automated transthoracic electrical bioimpedance method in estimating the cardiovascular responses to phenylephrine and isoprenaline in healthy man.

    PubMed Central

    De Mey, C; Enterling, D

    1993-01-01

    1. Impedance cardiography is a well-established noninvasive method to assess within-subject changes of cardiovascular function. We compared the standard approach (ZCG) which requires tedious signal analysis with an automated approach (TEB: NCCOM 3) with its own specific equipment, algorithms and equations in order to assess agreement of the method-specific measurements and calculations. 2. Ten healthy men were studied on two occasions with either ZCG or TEB, at rest and at the end of 5 min i.v.-infusions with 1 microgram min-1 isoprenaline and 100 micrograms min-1 phenylephrine. 3. There was good agreement for the method-independent changes (HR, SBP/DBP), but there were large differences for method-specific measurements: dZ/dtmax [TEB-ZCG] = -0.68, CI: -0.83 to -0.53 ohm s-1, PEP [TEB-ZCG] = -22.1, CI: -35.0 to -9.2 ms and QS2c [TEB-ZCG] = -16.5, CI: -32.4 to -0.6 ms and for the calculated stroke volume SV [TEB-ZCG] = 30.3, CI: 15.5 to 45.2 ml. The responses of dZ/dtmax and SV to isoprenaline and phenylephrine, although qualitatively similar, reached no quantitative agreement either. A substantial disagreement was evident for the STI responses to isoprenaline where TEB failed to detect the expected reduction of VETc and thus grossly underestimated the shortening of QS2c. 4. It is concluded that TEB-measurements and -calculations did not agree with standard ZCG, that the methods, albeit related, cannot be considered as interchangeable and that suspicion is justified that TEB might yield erroneous results under specific circumstances. PMID:8485014

  2. A current-excited triple-time-voltage oversampling method for bio-impedance model for cost-efficient circuit system.

    PubMed

    Yan Hong; Yong Wang; Wang Ling Goh; Yuan Gao; Lei Yao

    2015-08-01

    This paper presents a mathematical method and a cost-efficient circuit to measure the value of each component of the bio-impedance model at the electrode-electrolyte interface. The proposed current-excited triple-time-voltage oversampling (TTVO) method deduces the component values by solving a triple simultaneous electric equation (TSEE) system at different time nodes during a current excitation, the voltages being functions of time. The proposed TSEEs allow random selection of the time nodes, so numerous solutions can be obtained during a single current excitation. Following that, an oversampling approach is applied by averaging all solutions of the multiple TSEEs acquired after a single current excitation, which increases the practical measurement accuracy through improvement of the signal-to-noise ratio (SNR). In addition, a printed circuit board (PCB) consisting of a switched current exciter and an analog-to-digital converter (ADC) was designed for signal acquisition. This represents a large cost reduction compared with other instrument-based measurements reported [1]. Through testing, the measured values of this work proved to be in close agreement with the true component values of the electrode-electrolyte interface model. This work is well suited to biological and biomedical applications, for tasks such as stimulation, recording, and impedance characterization.
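    Under an assumed single-dispersion electrode-electrolyte model V(t) = I*Rs + I*Rp*(1 - exp(-t/tau)) with tau = Rp*Cp (a common simplification; the paper's exact model equations are not reproduced in the abstract), one triple-time-voltage system with equally spaced time nodes has the closed-form solution sketched below. TTVO-style oversampling would then average this solve over many random node choices within a single excitation:

```python
import numpy as np

def solve_interface_model(I, t1, d, v1, v2, v3):
    """Recover (Rs, Rp, Cp) of V(t) = I*Rs + I*Rp*(1 - exp(-t/tau)),
    tau = Rp*Cp, from three voltage samples at times t1, t1+d, t1+2d.
    With equal spacing, (v3-v2)/(v2-v1) = exp(-d/tau), so tau comes
    out in closed form and Rp, Rs follow by back-substitution."""
    r = (v3 - v2) / (v2 - v1)                 # equals exp(-d/tau)
    tau = -d / np.log(r)
    rp = (v2 - v1) / (I * np.exp(-t1 / tau) * (1.0 - r))
    rs = (v1 - I * rp * (1.0 - np.exp(-t1 / tau))) / I
    return rs, rp, tau / rp                   # Rs, Rp, Cp
```

Because each random choice of (t1, d) yields an independent solution from the same transient, averaging many such solutions suppresses sample noise, which is the SNR benefit the oversampling step exploits.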

  3. Theoretical considerations in measurement of time discrepancies between input and myocardial time-signal intensity curves in estimates of regional myocardial perfusion with first-pass contrast-enhanced MRI.

    PubMed

    Natsume, Takahiro; Ishida, Masaki; Kitagawa, Kakuya; Nagata, Motonori; Sakuma, Hajime; Ichihara, Takashi

    2015-11-01

    The purpose of this study was to develop a method to determine time discrepancies between input and myocardial time-signal intensity (TSI) curves for accurate estimation of myocardial perfusion with first-pass contrast-enhanced MRI. Estimation of myocardial perfusion with contrast-enhanced MRI using kinetic models requires faithful recording of contrast content in the blood and myocardium. Typically, the arterial input function (AIF) is obtained by setting a region of interest in the left ventricular cavity. However, there is a small delay between the AIF and the myocardial curves, and such time discrepancies can lead to errors in flow estimation using Patlak plot analysis. In this study, the time discrepancies between the arterial TSI curve and the myocardial tissue TSI curve were estimated based on the compartment model. In the early phase after the arrival of the contrast agent in the myocardium, the relationship between rate constant K1 and the concentrations of Gd-DTPA contrast agent in the myocardium and arterial blood (LV blood) can be described by the equation K1={dCmyo(tpeak)/dt}/Ca(tpeak), where Cmyo(t) and Ca(t) are the relative concentrations of Gd-DTPA contrast agent in the myocardium and in the LV blood, respectively, and tpeak is the time corresponding to the peak of Ca(t). In the ideal case, the time corresponding to the maximum upslope of Cmyo(t), tmax, is equal to tpeak. In practice, however, there is a small difference in the arrival times of the contrast agent into the LV and into the myocardium. This difference was estimated to correspond to the difference between tpeak and tmax. The magnitudes of such time discrepancies and the effectiveness of the correction for these time discrepancies were measured in 18 subjects who underwent myocardial perfusion MRI under rest and stress conditions. The effects of the time discrepancies could be corrected effectively in the myocardial perfusion estimates.
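    The delay estimation described above can be sketched numerically as follows (an illustrative version, not the authors' implementation): tpeak is the time of the arterial peak, tmax the time of the maximum myocardial upslope, their difference is the input-to-tissue delay, and a delay-corrected K1 divides the maximum upslope of Cmyo by Ca(tpeak):

```python
import numpy as np

def k1_with_delay_correction(t, ca, cmyo):
    """Given sampled TSI curves ca(t) (arterial input) and cmyo(t)
    (myocardium), return the delay-corrected K1 estimate and the
    time discrepancy tmax - tpeak."""
    i_peak = int(np.argmax(ca))               # tpeak: arterial peak
    slopes = np.gradient(cmyo, t)             # dCmyo/dt on the sample grid
    i_max = int(np.argmax(slopes))            # tmax: maximum tissue upslope
    k1 = slopes[i_max] / ca[i_peak]
    return k1, t[i_max] - t[i_peak]
```

In the ideal case of the abstract, tmax equals tpeak and the correction vanishes; when the contrast bolus reaches the myocardium late, using the slope at tmax instead of at tpeak avoids the underestimation of K1 that the raw formula would produce.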

  4. Comparison of multi-frequency bioimpedance with perometry for the early detection and intervention of lymphoedema after axillary node clearance for breast cancer.

    PubMed

    Bundred, Nigel J; Stockton, Charlotte; Keeley, Vaughan; Riches, Katie; Ashcroft, Linda; Evans, Abigail; Skene, Anthony; Purushotham, Arnie; Bramley, Maria; Hodgkiss, Tracey

    2015-05-01

    The importance of early detection of lymphoedema, through arm volume measurements before surgery and repeated measurements after surgery in women undergoing axillary node clearance (ANC), is recognised because it enables early intervention. A prospective multi-centre study was performed to compare multi-frequency bioimpedance electrical analysis (BIS) with perometer arm measurement in predicting the development of lymphoedema. Women undergoing ANC underwent pre-operative and regular post-operative measurements of arm volume by both methods. The primary endpoint is the incidence of lymphoedema (≥10 % arm volume increase compared to the contralateral arm by perometer) at 2 and 5 years after ANC. The threshold for intervention in lymphoedema was also assessed. Out of 964 patients recruited, 612 had a minimum of 6 months of follow-up data. Using 1-month post-operative measurements as baseline, perometer detected 31 patients with lymphoedema by 6 months (BIS detected 53). By 6 months, 89 % of those with no lymphoedema reported at least one symptom. There was moderate correlation between perometer and BIS at 3 months (r = 0.40) and 6 months (r = 0.60), with a sensitivity of 73 % and specificity of 84 %. Univariate and multivariate analyses revealed a threshold for early intervention of ≥5 to <10 % (p = 0.03). The threshold for early intervention to prevent progression to lymphoedema is ≥5 to <10 %, but symptoms alone do not predict lymphoedema. The modest correlation between methods at 6 months indicates arm volume measurements remain the gold standard, although longer-term follow-up is required.
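    The thresholds reported above amount to a simple classification of the relative arm volume increase against the contralateral arm. A hedged sketch (the function name and the millilitre units are illustrative, not from the study protocol):

```python
def lymphoedema_status(affected_ml, contralateral_ml):
    """Classify the relative arm volume increase (RAVI, in percent)
    using the study's thresholds: >=10% lymphoedema, and >=5% to <10%
    as the window for early intervention."""
    ravi = 100.0 * (affected_ml - contralateral_ml) / contralateral_ml
    if ravi >= 10.0:
        return ravi, "lymphoedema"
    if ravi >= 5.0:
        return ravi, "early intervention"
    return ravi, "no lymphoedema"
```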

  5. Increased extracellular water measured by bioimpedance and by increased serum levels of atrial natriuretic peptide in RA patients-signs of volume overload.

    PubMed

    Straub, Rainer H; Ehrenstein, Boris; Günther, Florian; Rauch, Luise; Trendafilova, Nadezhda; Boschiero, Dario; Grifka, Joachim; Fleck, Martin

    2016-04-26

    The aim of the study is to investigate water compartments in patients with rheumatoid arthritis (RA). Acute inflammatory episodes such as infection stimulate water retention, chiefly implemented by the sympathetic nervous system (SNS) and hypothalamic-pituitary-adrenal (HPA) axis. This is an important compensatory mechanism due to expected water loss (sweating etc.). Since SNS and HPA axis are activated in RA, inflammation might be accompanied by water retention. Using bioimpedance analysis, body composition was investigated in 429 controls and 156 treatment-naïve RA patients between January 2008 and December 2014. A group of 34 RA patients was tested before and after 10 days of intensified therapy. Levels of pro-atrial natriuretic peptide (proANP) and expression of atrial natriuretic peptide in synovial tissue were investigated in 15 controls and 14 RA patients. Extracellular water was higher in RA patients than controls (mean ± SEM: 49.5 ± 0.3 vs. 36.7 ± 0.1, % of total body water, p < 0.0001). Plasma levels of proANP were higher in RA than controls. RA patients expressed ANP in synovial tissue, but synovial fluid levels and synovial tissue superfusate levels were much lower than plasma levels indicating systemic origin. Systolic/diastolic blood pressure was higher in RA patients than controls. Extracellular water levels did not change in RA patients despite 10 days of intensified treatment. This study demonstrates signs of intravascular overload in RA patients. Short-term intensification of anti-inflammatory therapy induced no change of a longer-lasting imprinting of water retention indicating the requirement of additional treatment. The study can direct attention to the area of volume overload.

  6. Experimental (FT-IR, NMR and UV) and theoretical (M06-2X and DFT) investigation, and frequency estimation analyses on (E)-3-(4-bromo-5-methylthiophen-2-yl)acrylonitrile

    NASA Astrophysics Data System (ADS)

    Sert, Yusuf; Balakit, Asim A.; Öztürk, Nuri; Ucun, Fatih; El-Hiti, Gamal A.

    2014-10-01

    The spectroscopic properties of (E)-3-(4-bromo-5-methylthiophen-2-yl)acrylonitrile have been investigated by FT-IR, UV, 1H and 13C NMR techniques. The theoretical vibrational frequencies and optimized geometric parameters (bond lengths and angles) have been calculated using density functional theory (DFT/B3LYP: Becke, 3-parameter, Lee-Yang-Parr) and DFT/M06-2X (the highly parameterized, empirical exchange correlation function) quantum chemical methods with 6-311++G(d,p) basis set by Gaussian 03 software, for the first time. The assignments of the vibrational frequencies have been carried out by potential energy distribution (PED) analysis by using VEDA 4 software. The theoretical optimized geometric parameters and vibrational frequencies were in good agreement with the corresponding experimental data, and with the results in the literature. 1H and 13C NMR chemical shifts were calculated by using the gauge-invariant atomic orbital (GIAO) method. The electronic properties, such as excitation energies, oscillator strength wavelengths were performed by B3LYP methods. In addition, the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) energies and the other related molecular energy values have been calculated and depicted.

  7. Experimental (FT-IR, NMR and UV) and theoretical (M06-2X and DFT) investigation, and frequency estimation analyses on (E)-3-(4-bromo-5-methylthiophen-2-yl)acrylonitrile.

    PubMed

    Sert, Yusuf; Balakit, Asim A; Öztürk, Nuri; Ucun, Fatih; El-Hiti, Gamal A

    2014-10-15

    The spectroscopic properties of (E)-3-(4-bromo-5-methylthiophen-2-yl)acrylonitrile have been investigated by FT-IR, UV, (1)H and (13)C NMR techniques. The theoretical vibrational frequencies and optimized geometric parameters (bond lengths and angles) have been calculated using density functional theory (DFT/B3LYP: Becke, 3-parameter, Lee-Yang-Parr) and DFT/M06-2X (the highly parameterized, empirical exchange correlation function) quantum chemical methods with 6-311++G(d,p) basis set by Gaussian 03 software, for the first time. The assignments of the vibrational frequencies have been carried out by potential energy distribution (PED) analysis by using VEDA 4 software. The theoretical optimized geometric parameters and vibrational frequencies were in good agreement with the corresponding experimental data, and with the results in the literature. (1)H and (13)C NMR chemical shifts were calculated by using the gauge-invariant atomic orbital (GIAO) method. The electronic properties, such as excitation energies, oscillator strength wavelengths were performed by B3LYP methods. In addition, the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) energies and the other related molecular energy values have been calculated and depicted.

  8. Bibliography for aircraft parameter estimation

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.; Maine, Richard E.

    1986-01-01

    An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.

  9. Information Theoretic Causal Coordination

    DTIC Science & Technology

    2013-09-12

    In his 1969 paper, Clive Granger, British economist and Nobel laureate, proposed a statistical definition of causality between stochastic processes. It was shown that the directed information, an information theoretic quantity, quantifies Granger causality. We also explored a more pessimistic setup. Final Technical Report. Project Title: Information Theoretic Causal Coordination. AFOSR Award Number: AF FA9550-10-1-0345. Reporting Period: July 15

  10. Theoretical and computational chemistry.

    PubMed

    Meuwly, Markus

    2010-01-01

    Computer-based and theoretical approaches to chemical problems can provide atomistic understanding of complex processes at the molecular level. Examples ranging from rates of ligand-binding reactions in proteins to structural and energetic investigations of diastereomers relevant to organo-catalysis are discussed in the following. They highlight the range of application of theoretical and computational methods to current questions in chemical research.

  11. An intelligent control framework for robot-aided resistance training using hybrid system modeling and impedance estimation.

    PubMed

    Xu, Guozheng; Guo, Xiaobo; Zhai, Yan; Li, Huijun

    2015-08-01

    This study presents a novel therapy control method for robot-assisted resistance training, using hybrid system modeling technology and estimated changes in the patient's bio-impedance. A new intelligent control framework based on hybrid system theory is developed to automatically generate the desired resistive force and to accommodate emergency behavior when monitoring changes in the impaired limb's muscle strength or unpredictable safety-related occurrences during execution of the training task. The impaired limb's muscle strength progress is evaluated online using its bio-damping and bio-stiffness estimates. The proposed method is verified with a custom-constructed therapeutic robot system featuring a Barrett WAM™ compliant manipulator. A typical inpatient stroke subject was recruited and enrolled in a ten-week resistance training program. Preliminary results show that the proposed therapeutic strategy can enhance the impaired limb's muscle strength and is practicable for robot-aided rehabilitation training.
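The abstract does not give the estimation procedure, but a common way to obtain bio-damping and bio-stiffness estimates of the kind it describes is a least-squares fit of a linear impedance model F(t) = K·x(t) + B·v(t) to recorded position and force signals. A minimal sketch on synthetic data, with all signal names and parameter values assumed for illustration:

```python
import numpy as np

# Hypothetical sketch: estimating limb bio-stiffness K [N/m] and bio-damping
# B [N·s/m] from a recorded trajectory and interaction force, assuming the
# linear impedance model F = K*x + B*v. Values below are illustrative only.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 400)
x = 0.05 * np.sin(2 * np.pi * 1.5 * t)   # position [m]
v = np.gradient(x, t)                     # velocity [m/s]

K_true, B_true = 320.0, 14.0              # assumed "true" parameters
F = K_true * x + B_true * v + rng.normal(0.0, 0.05, t.size)  # noisy force [N]

# Ordinary least squares: solve [x v] @ [K, B] ≈ F
A = np.column_stack([x, v])
(K_est, B_est), *_ = np.linalg.lstsq(A, F, rcond=None)
print(f"K ≈ {K_est:.1f} N/m, B ≈ {B_est:.2f} N·s/m")
```

Tracking these estimates over training sessions would give the kind of online muscle-strength progress measure the abstract refers to.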

  12. DXA, bioelectrical impedance, ultrasonography and biometry for the estimation of fat and lean mass in cats during weight loss

    PubMed Central

    2012-01-01

    Background Few equations have been developed in veterinary medicine, compared to human medicine, to predict body composition. The present study evaluated the influence of weight loss on biometry (BIO), bioimpedance analysis (BIA) and ultrasonography (US) in cats, proposing equations to estimate fat mass (FM) and lean mass (LM), with dual energy x-ray absorptiometry (DXA) as the reference method. Sixteen gonadectomized obese cats (8 males and 8 females) enrolled in a weight loss program were used. DXA, BIO, BIA and US were performed in the obese state (T0; obese animals), after 10% weight loss (T1) and after 20% weight loss (T2). Stepwise regression was used to analyze the relationship between the dependent variables (FM, LM) determined by DXA and the independent variables obtained by BIO, BIA and US. The best models were then evaluated by simple regression analysis, and predicted means were compared with those determined by DXA to verify the accuracy of the equations. Results The independent variables determined by BIO, BIA and US that correlated best (p < 0.005) with the dependent variables (FM and LM) were BW (body weight), TC (thoracic circumference), PC (pelvic circumference), R (resistance) and SFLT (subcutaneous fat layer thickness). Using Mallows' Cp statistic, p values and r2, 19 equations were selected (12 for FM, 7 for LM); however, only 7 equations accurately predicted FM, and one accurately predicted LM. Conclusions The two-variable equations are preferable because they are effective and offer an alternative method for estimating body composition in the clinical routine. To estimate lean mass, equations using body weight combined with biometric measures can be proposed; to estimate fat mass, equations using body weight combined with bioimpedance analysis can be proposed. PMID:22781317
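The fitting step described above (regressing DXA-determined fat mass on candidate predictors and checking r²) can be sketched as follows. The data and coefficients here are synthetic and purely illustrative; the paper's actual equations are not reproduced.

```python
import numpy as np

# Illustrative sketch: fitting a two-variable prediction equation
#   FM ≈ a*BW + b*R + c
# for fat mass from body weight (BW) and bioimpedance resistance (R),
# in the spirit of the study's regression approach. Synthetic data only.
rng = np.random.default_rng(1)
BW = rng.uniform(3.5, 6.5, 16)        # body weight [kg], 16 cats
R = rng.uniform(120.0, 220.0, 16)     # resistance [ohm]
# Synthetic "DXA-determined" fat mass [kg] (assumed relationship + noise):
FM = 0.55 * BW - 0.004 * R + 0.3 + rng.normal(0.0, 0.05, 16)

A = np.column_stack([BW, R, np.ones_like(BW)])
coef, *_ = np.linalg.lstsq(A, FM, rcond=None)
FM_pred = A @ coef

# Coefficient of determination of the fitted equation
r2 = 1 - np.sum((FM - FM_pred) ** 2) / np.sum((FM - FM.mean()) ** 2)
print(f"FM ≈ {coef[0]:.3f}·BW + {coef[1]:.4f}·R + {coef[2]:.3f} (r² = {r2:.3f})")
```

Comparing the mean of `FM_pred` against the mean of the DXA values is the accuracy check the study describes.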

  13. [Once again: theoretical pathology].

    PubMed

    Bleyl, U

    2010-07-01

    Theoretical pathology refers to the attempt to reintroduce methodical approaches from the humanities, philosophical logic and "gestalt philosophy" into medical research and pathology. Diseases, in particular disease entities and more complex polypathogenetic mechanisms of disease, have a "gestalt quality" due to the significance of their pathophysiologic coherence: they have a "gestalt". The research group on Theoretical Pathology at the Academy of Sciences in Heidelberg is credited with having revitalized the philosophical notion of "gestalt" for morphological and pathological diagnostics. Gestalt means interrelated schemes of pathophysiological significance in the mind of the diagnostician. In pathology, additive and associative diagnostics are simply not possible without considering the notion of synthetic entities in Kant's logic.

  14. Estimating Large Numbers

    ERIC Educational Resources Information Center

    Landy, David; Silbert, Noah; Goldin, Aleah

    2013-01-01

    Despite their importance in public discourse, numbers in the range of 1 million to 1 trillion are notoriously difficult to understand. We examine magnitude estimation by adult Americans when placing large numbers on a number line and when qualitatively evaluating descriptions of imaginary geopolitical scenarios. Prior theoretical conceptions…

  15. A Theoretical Trombone

    ERIC Educational Resources Information Center

    LoPresto, Michael C.

    2014-01-01

    What follows is a description of a theoretical model designed to calculate the playing frequencies of the musical pitches produced by a trombone. The model is based on quantitative treatments that demonstrate the effects of the flaring bell and cup-shaped mouthpiece sections on these frequencies and can be used to calculate frequencies that…

  16. A theoretical trombone

    NASA Astrophysics Data System (ADS)

    LoPresto, Michael C.

    2014-09-01

    What follows is a description of a theoretical model designed to calculate the playing frequencies of the musical pitches produced by a trombone. The model is based on quantitative treatments that demonstrate the effects of the flaring bell and cup-shaped mouthpiece sections on these frequencies and can be used to calculate frequencies that compare well to both the desired frequencies of the musical pitches and those actually played on a real trombone.

  17. Theoretical Approaches to Nanoparticles

    NASA Astrophysics Data System (ADS)

    Kempa, Krzysztof

    Nanoparticles can be viewed as wave resonators. The waves involved are, for example, carrier waves, plasmon waves, polariton waves, etc. A few examples of successful theoretical treatments that follow this approach are given. In one, an effective medium theory of a nanoparticle composite is presented. In another, plasmon polaritonic solutions allow concepts of radio technology, such as the antenna and the coaxial transmission line, to be extended to the visible frequency range.

  18. Theoretical Delay Time Distributions

    NASA Astrophysics Data System (ADS)

    Nelemans, Gijs; Toonen, Silvia; Bours, Madelon

    2013-01-01

    We briefly discuss the method of population synthesis to calculate theoretical delay time distributions of Type Ia supernova progenitors. We also compare the results of different research groups and conclude that, although one of the main differences in the results for single degenerate progenitors is the retention efficiency with which accreted hydrogen is added to the white dwarf core, this alone cannot explain all the differences.

  19. Panorama of theoretical physics

    NASA Astrophysics Data System (ADS)

    Mimouni, J.

    2012-06-01

    We shall start this panorama of theoretical physics by giving an overview of physics in general, the branch of knowledge that has been taken since the scientific revolution as the archetype of the scientific discipline. We shall then proceed to show in what way theoretical physics, from Newton to Maxwell, Einstein, Feynman and the like, could in all modesty be considered the ticking heart of physics. By its special mode of inquiry and its tantalizing successes, it has captured the very spirit of the scientific method, and indeed it has been taken as a role model by other disciplines, all the way from the "hard" ones to the social sciences. We shall then review how much we know today of the world of matter, both in terms of its basic content and in the way it is structured. We will then present the dreams of today's theoretical physics as a way of penetrating into its psyche, discovering in this way its aspirations and longings, in much the same way that a child's dreams tell us about his yearnings and cravings. Yet our understanding of matter has been going through a crisis of sorts in the past decades. As a necessary antidote, we shall thus discuss the pitfalls of dreams pushed too far….

  20. Theoretical Developments in SUSY

    NASA Astrophysics Data System (ADS)

    Shifman, M.

    2009-01-01

    I am proud that I was personally acquainted with Julius Wess. We first met in 1999 when I was working on the Yuri Golfand Memorial Volume (The Many Faces of the Superworld, World Scientific, Singapore, 2000). I invited him to contribute, and he accepted this invitation with enthusiasm. After that, we met many times, mostly at various conferences in Germany and elsewhere. I was lucky to discuss with Julius questions of theoretical physics, and hear his recollections on how supersymmetry was born. In physics Julius was a visionary, who paved the way to generations of followers. In everyday life he was a kind and modest person, always ready to extend a helping hand to people who were in need of his help. I remember him telling me how concerned he was about the fate of theoretical physicists in Eastern Europe after the demise of communism. His ties with Israeli physicists bore a special character. I am honored by the opportunity to contribute an article to the Julius Wess Memorial Volume. I will review theoretical developments of the recent years in non-perturbative supersymmetry.

  1. Theoretical estimation of static charge fluctuation in amorphous silicon

    NASA Astrophysics Data System (ADS)

    Kugler, Sándor; Surján, Péter R.; Náray-Szabó, Gábor

    1988-05-01

    A quantum-chemical method has been developed to determine charge fluctuations in finite aperiodic clusters of amorphous silicon. Calculated atomic net charges are in a close linear relationship to bond-angle distortions involving first and second neighbors. Applying this relationship to a continuous-random-network model of 216 silicon atoms proposed by Wooten et al., we obtained 0.021 electron units for the rms deviation from charge neutrality.

  2. Numerical Estimation of Information Theoretic Measures for Large Data Sets

    DTIC Science & Technology

    2013-01-30

    TABLE OF CONTENTS: Abstract; List of Illustrations; List of Tables; 1. INTRODUCTION; 2. EVALUATION OF MULTI-TARGET TRACKERS AND…A problem that has plagued the tracking community for decades has been…

  3. Theoretical Astrophysics at Fermilab

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Theoretical Astrophysics Group works on a broad range of topics ranging from string theory to data analysis in the Sloan Digital Sky Survey. The group is motivated by the belief that a deep understanding of fundamental physics is necessary to explain a wide variety of phenomena in the universe. During the three years 2001-2003 of our previous NASA grant, over 120 papers were written; ten of our postdocs went on to faculty positions; and we hosted or organized many workshops and conferences. Kolb and collaborators focused on the early universe, in particular models and ramifications of the theory of inflation. They also studied models with extra dimensions, new types of dark matter, and the second order effects of super-horizon perturbations. Stebbins, Frieman, Hui, and Dodelson worked on phenomenological cosmology, extracting cosmological constraints from surveys such as the Sloan Digital Sky Survey. They also worked on theoretical topics such as weak lensing, reionization, and dark energy. This work has proved important to a number of experimental groups [including those at Fermilab] planning future observations. In general, the work of the Theoretical Astrophysics Group has served as a catalyst for experimental projects at Fermilab. An example of this is the Joint Dark Energy Mission. Fermilab is now a member of SNAP, and much of the work done here is by people formerly working on the accelerator. We have created an environment where many of these people made the transition from physics to astronomy. We also worked on many other topics related to NASA's focus: cosmic rays, dark matter, the Sunyaev-Zel'dovich effect, the galaxy distribution in the universe, and the Lyman alpha forest. The group organized and hosted a number of conferences and workshops over the years covered by the grant. Among them were:

  4. M dwarfs: Theoretical work

    NASA Technical Reports Server (NTRS)

    Mullan, Dermott J.

    1987-01-01

    Theoretical work on the atmospheres of M dwarfs has progressed along lines parallel to those followed in the study of other classes of stars. Such models have become increasingly sophisticated as improvements in opacities, in the equation of state, and in the treatment of convection were incorporated during the last 15 to 20 years. As a result, spectrophotometric data on M dwarfs can now be fitted rather well by current models. The various attempts at modeling M dwarf photospheres in purely thermal terms are summarized. Some extensions of these models to include the effects of microturbulence and magnetic inhomogeneities are presented.

  5. Institute for Theoretical Physics

    SciTech Connect

    Giddings, S.B.; Ooguri, H.; Peet, A.W.; Schwarz, J.H.

    1998-06-01

    String theory is the only serious candidate for a unified description of all known fundamental particles and interactions, including gravity, in a single theoretical framework. Over the past two years, activity in this subject has grown rapidly, thanks to dramatic advances in understanding the dynamics of supersymmetric field theories and string theories. The cornerstone of these new developments is the discovery of duality which relates apparently different string theories and transforms difficult strongly coupled problems of one theory into weakly coupled problems of another theory.

  6. Theoretical Optics: An Introduction

    NASA Astrophysics Data System (ADS)

    Römer, Hartmann

    2005-02-01

    Starting from basic electrodynamics, this volume provides a solid, yet concise introduction to theoretical optics, containing topics such as nonlinear optics, light-matter interaction, and modern topics in quantum optics, including entanglement, cryptography, and quantum computation. The author, with many years of experience in teaching and research, goes well beyond the scope of traditional lectures, enabling readers to keep up with the current state of knowledge. Both content and presentation make it essential reading for graduate and PhD students as well as a valuable reference for researchers.

  7. Theoretical Aspects of Dromedaryfoil.

    DTIC Science & Technology

    1977-11-01

    Seginer were taken on a Yoshihara "A" supercritical airfoil. Steinle and Gross used a 64A010 airfoil. All the data points lie within the theoretical...experimental data that for the same airfoil, either 64A410 or 64A010 , the higher the angle of attack, the sooner the limiting pressure is reached. The...shock 13 Stivers, L.S., Jr., "Effects of Subsonic Mach Numbers on the Forces and Pressure Distributions on Four NACA 64A-Series Airfoil Sections at

  8. Theoretical basis for the Beale number

    SciTech Connect

    West, C.D.

    1981-08-01

    The Beale number is an empirically derived figure relating the power output of a Stirling engine to working gas pressure, operating frequency, and piston displacement. It is widely used to make preliminary estimates of performance of new designs and to compare the performance of existing engines. Two separate areas of investigation are combined to give a theoretical value for the Beale number and a straightforward explanation of its physical significance. 5 refs.
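The empirical relation the report examines is commonly written P ≈ Bn·p_mean·f·V_swept, with Bn ≈ 0.15 in consistent SI units for well-designed engines. A quick numerical illustration, with the engine parameters below assumed purely for the example:

```python
# Beale-number power estimate for a Stirling engine (SI units throughout).
# Bn ≈ 0.15 is the commonly quoted empirical value; the engine parameters
# are assumptions for illustration, not data from the report.
Bn = 0.15            # Beale number (dimensionless, empirical)
p_mean = 5.0e6       # mean working-gas pressure [Pa] (50 bar, assumed)
f = 50.0             # operating frequency [Hz] (assumed)
V_swept = 1.0e-4     # piston swept volume [m^3] (100 cm^3, assumed)

P = Bn * p_mean * f * V_swept   # estimated power output [W]
print(f"Estimated power output: {P:.0f} W")
```

This is exactly the kind of preliminary design estimate the abstract describes; the report's contribution is a theoretical derivation of the value of Bn.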

  9. Theoretical percussion force of the periotest diagnosis.

    PubMed

    Kaneko, T

    1998-01-01

    The Periotest percussion force acting on a dental implant was estimated by assuming a mass-spring-dashpot model of the implant-bone system constructed on the basis of a clinical experiment. A theoretical value of about 10 N, comparable to hitherto reported experimental values, was obtained for an osseointegrated implant of about 1 g. The percussion force would probably be smaller for a heavier implant.
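For an approximately undamped mass-spring impact, the peak contact force follows from energy conservation as F_peak ≈ v0·sqrt(k·m). A minimal sketch of such an order-of-magnitude estimate; all parameter values below are assumptions for illustration, not the paper's data:

```python
import math

# Order-of-magnitude estimate of the peak percussion force when a tapping
# rod strikes an implant modeled as a mass-spring system (damping neglected):
#   F_peak ≈ v0 * sqrt(k * m)
# All numbers are illustrative assumptions, not the paper's measured values.
m = 0.008        # effective impacting mass [kg] (assumed ~8 g)
v0 = 0.2         # impact velocity [m/s] (assumed)
k = 3.1e5        # effective implant-bone contact stiffness [N/m] (assumed)

F_peak = v0 * math.sqrt(k * m)
print(f"Peak percussion force ≈ {F_peak:.1f} N")
```

With these assumed inputs the estimate lands in the ~10 N range the abstract reports; a heavier implant (larger effective m on the receiving side) lowers the force, consistent with the abstract's closing remark.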

  10. Theoretical analysis of ARC constriction

    SciTech Connect

    Stoenescu, M.L.; Brooks, A.W.; Smith, T.M.

    1980-12-01

    The physics of the thermionic converter is governed by strong electrode-plasma interactions (emission, surface scattering, charge exchange) and weak interactions (diffusion, radiation) at the maximum interelectrode plasma radius. The physical processes are thus mostly convective in thin sheaths in front of the electrodes and mostly diffusive and radiative in the plasma bulk. The physical boundaries are open boundaries to particle transfer (electrons emitted or absorbed by the electrodes, all particles diffusing through some maximum plasma radius) and to convective, conductive and radiative heat transfer. In a first approximation the thermionic converter may be described by a one-dimensional classical transport theory. The two-dimensional effects may be significant as a result of the sheath sensitivity to radial plasma variations and of the strong sheath-plasma coupling. The current-voltage characteristic of the converter is thus the result of an integrated current density over the collector area for which the boundary conditions at each r determine the regime (ignited/unignited) of the local current density. A current redistribution strongly weighted at small radii (arc constriction) limits the converter performance and opens questions on constriction reduction possibilities. The questions addressed are the following: (1) what are the main contributors to the loss of current at high voltage in the thermionic converter; and (2) is arc constriction observable theoretically and what are the conditions of its occurrence. The resulting theoretical problem is formulated and results are given. The converter electrical current is estimated directly from the electron and ion particle fluxes based on the spatial distribution of the electron/ion density n, temperatures T/sub e/, T/sub i/, electrical voltage V and on the knowledge of the transport coefficients. (WHK)

  11. Theoretical ecology without species

    NASA Astrophysics Data System (ADS)

    Tikhonov, Mikhail

    The sequencing-driven revolution in microbial ecology demonstrated that discrete ``species'' are an inadequate description of the vast majority of life on our planet. Developing a novel theoretical language that, unlike classical ecology, would not require postulating the existence of species, is a challenge of tremendous medical and environmental significance, and an exciting direction for theoretical physics. Here, it is proposed that community dynamics can be described in a naturally hierarchical way in terms of population fluctuation eigenmodes. The approach is applied to a simple model of division of labor in a multi-species community. In one regime, effective species with a core and accessory genome are shown to naturally appear as emergent concepts. However, the same model allows a transition into a regime where the species formalism becomes inadequate, but the eigenmode description remains well-defined. Treating a community as a black box that expresses enzymes in response to resources reveals mathematically exact parallels between a community and a single coherent organism with its own fitness function. This coherence is a generic consequence of division of labor, requires no cooperative interactions, and can be expected to be widespread in microbial ecosystems. Harvard Center of Mathematical Sciences and Applications; John A. Paulson School of Engineering and Applied Sciences.

  12. Dark matter: Theoretical perspectives

    SciTech Connect

    Turner, M.S. (Fermi National Accelerator Lab., Batavia, IL)

    1993-06-01

    The author both reviews and makes the case for the current theoretical prejudice: a flat Universe whose dominant constituent is nonbaryonic dark matter, emphasizing that this is still a prejudice and not yet fact. The theoretical motivation for nonbaryonic dark matter is discussed in the context of current elementary-particle theory, stressing that (i) there are no dark-matter candidates within the "standard model" of particle physics, (ii) there are several compelling candidates within attractive extensions of the standard model of particle physics, and (iii) the motivation for these compelling candidates comes first and foremost from particle physics. The dark-matter problem is now a pressing issue in both cosmology and particle physics, and the detection of particle dark matter would provide evidence for "new physics." The compelling candidates are a very light axion (10^-6--10^-4 eV), a light neutrino (20--90 eV), and a heavy neutralino (10 GeV--2 TeV). The production of these particles in the early Universe and the prospects for their detection are also discussed. The author briefly mentions more exotic possibilities for the dark matter, including a nonzero cosmological constant, superheavy magnetic monopoles, and decaying neutrinos. 119 refs.

  13. Dark matter: theoretical perspectives.

    PubMed Central

    Turner, M S

    1993-01-01

    I both review and make the case for the current theoretical prejudice: a flat Universe whose dominant constituent is nonbaryonic dark matter, emphasizing that this is still a prejudice and not yet fact. The theoretical motivation for nonbaryonic dark matter is discussed in the context of current elementary-particle theory, stressing that (i) there are no dark-matter candidates within the "standard model" of particle physics, (ii) there are several compelling candidates within attractive extensions of the standard model of particle physics, and (iii) the motivation for these compelling candidates comes first and foremost from particle physics. The dark-matter problem is now a pressing issue in both cosmology and particle physics, and the detection of particle dark matter would provide evidence for "new physics." The compelling candidates are a very light axion (10(-6)-10(-4) eV), a light neutrino (20-90 eV), and a heavy neutralino (10 GeV-2 TeV). The production of these particles in the early Universe and the prospects for their detection are also discussed. I briefly mention more exotic possibilities for the dark matter, including a nonzero cosmological constant, superheavy magnetic monopoles, and decaying neutrinos. PMID:11607395

  14. Dark matter: Theoretical perspectives

    SciTech Connect

    Turner, M.S.

    1993-01-01

    I both review and make the case for the current theoretical prejudice: a flat Universe whose dominant constituent is nonbaryonic dark matter, emphasizing that this is still a prejudice and not yet fact. The theoretical motivation for nonbaryonic dark matter is discussed in the context of current elementary-particle theory, stressing that: (1) there are no dark matter candidates within the standard model of particle physics; (2) there are several compelling candidates within attractive extensions of the standard model of particle physics; and (3) the motivation for these compelling candidates comes first and foremost from particle physics. The dark-matter problem is now a pressing issue in both cosmology and particle physics, and the detection of particle dark matter would provide evidence for "new physics." The compelling candidates are: a very light axion (10^-6 eV--10^-4 eV); a light neutrino (20 eV--90 eV); and a heavy neutralino (10 GeV--2 TeV). The production of these particles in the early Universe and the prospects for their detection are also discussed. I briefly mention more exotic possibilities for the dark matter, including a nonzero cosmological constant, superheavy magnetic monopoles, and decaying neutrinos.

  15. Dark matter: Theoretical perspectives

    SciTech Connect

    Turner, M.S. (Enrico Fermi Inst.; Fermi National Accelerator Lab., Batavia, IL)

    1993-01-01

    I both review and make the case for the current theoretical prejudice: a flat Universe whose dominant constituent is nonbaryonic dark matter, emphasizing that this is still a prejudice and not yet fact. The theoretical motivation for nonbaryonic dark matter is discussed in the context of current elementary-particle theory, stressing that: (1) there are no dark matter candidates within the standard model of particle physics; (2) there are several compelling candidates within attractive extensions of the standard model of particle physics; and (3) the motivation for these compelling candidates comes first and foremost from particle physics. The dark-matter problem is now a pressing issue in both cosmology and particle physics, and the detection of particle dark matter would provide evidence for "new physics." The compelling candidates are: a very light axion (10^-6 eV--10^-4 eV); a light neutrino (20 eV--90 eV); and a heavy neutralino (10 GeV--2 TeV). The production of these particles in the early Universe and the prospects for their detection are also discussed. I briefly mention more exotic possibilities for the dark matter, including a nonzero cosmological constant, superheavy magnetic monopoles, and decaying neutrinos.

  16. Theoretical Particle Astrophysics

    SciTech Connect

    Kamionkowski, Marc

    2013-08-07

    Abstract: The research carried out under this grant encompassed work on the early Universe, dark matter, and dark energy. We developed CMB probes for primordial baryon inhomogeneities, primordial non-Gaussianity, cosmic birefringence, gravitational lensing by density perturbations and gravitational waves, and departures from statistical isotropy. We studied the detectability of wiggles in the inflation potential in string-inspired inflation models. We studied novel dark-matter candidates and their phenomenology. This work helped advance the DoE's Cosmic Frontier (and also Energy and Intensity Frontiers) by finding synergies between a variety of different experimental efforts, by developing new searches, science targets, and analyses for existing/forthcoming experiments, and by generating ideas for new next-generation experiments.

  17. Theoretical and experimental methods to select aircraft handling qualities

    NASA Astrophysics Data System (ADS)

    Zaichik, L. E.; Yashin, Y. P.; Perebatov, V. S.; Desyatnik, P. A.

    2013-12-01

    A theoretical-experimental method is developed to analyze and adequately select aircraft handling qualities (HQ). A review is presented of the criteria developed by the authors to estimate the role of motion cues in controlling an aircraft, and of criteria to estimate on-ground simulation fidelity. A method is presented for translating on-ground simulation results into real flight conditions.

  18. Topics in theoretical astrophysics

    NASA Astrophysics Data System (ADS)

    Li, Chao

    This thesis presents a study of various interesting problems in theoretical astrophysics, including gravitational wave astronomy, gamma ray bursts and cosmology. Chapters 2, 3 and 4 explore prospects for detecting gravitational waves from stellar-mass compact objects spiraling into intermediate-mass black holes with ground-based observatories. It is shown in chapter 2 that if the central body is not a BH but its metric is stationary, axisymmetric, reflection symmetric and asymptotically flat, then the waves will likely be triperiodic, as for a BH. Chapters 3 and 4 show that the evolutions of the waves' three fundamental frequencies and of the complex amplitudes of their spectral components encode (in principle) details of the central body's metric, the energy and angular momentum exchange between the central body and the orbit, and the time-evolving orbital elements. Chapter 5 studies a local readout method to enhance the low frequency sensitivity of detuned signal-recycling interferometers. We provide both the results of improvement in quantum noise and the implementation details in Advanced LIGO. Chapter 6 applies and generalizes causal Wiener filter to data analysis in macroscopic quantum mechanical experiments. With the causal Wiener filter method, we demonstrate that in theory we can put the test masses in the interferometer to its quantum mechanical ground states. Chapter 7 presents some analytical solutions for expanding fireballs, the common theoretical model for gamma ray bursts and soft gamma ray repeaters. We apply our results to SGR 1806-20 and rediscover the mismatch between the model and the afterglow observations. Chapter 8 discusses the reconstruction of the scalar-field potential of the dark energy. We advocate direct reconstruction of the scalar field potential as a way to minimize prior assumptions on the shape, and thus minimize the introduction of bias in the derived potential. Chapter 9 discusses gravitational lensing modifications to cosmic

  19. Information Theoretic Shape Matching.

    PubMed

    Hasanbelliu, Erion; Giraldo, Luis Sanchez; Príncipe, José C

    2014-12-01

    In this paper, we describe two related algorithms that provide both rigid and non-rigid point set registration with different computational complexity and accuracy. The first algorithm utilizes a nonlinear similarity measure known as correntropy. The measure combines second and high order moments in its decision statistic showing improvements especially in the presence of impulsive noise. The algorithm assumes that the correspondence between the point sets is known, which is determined with the surprise metric. The second algorithm mitigates the need to establish a correspondence by representing the point sets as probability density functions (PDF). The registration problem is then treated as a distribution alignment. The method utilizes the Cauchy-Schwarz divergence to measure the similarity/distance between the point sets and recover the spatial transformation function needed to register them. Both algorithms utilize information theoretic descriptors; however, correntropy works at the realizations level, whereas Cauchy-Schwarz divergence works at the PDF level. This allows correntropy to be less computationally expensive, and for correct correspondence, more accurate. The two algorithms are robust against noise and outliers and perform well under varying levels of distortion. They outperform several well-known and state-of-the-art methods for point set registration.
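The second algorithm's core quantity, the Cauchy-Schwarz divergence between Parzen (kernel density) estimates of two point sets, can be sketched compactly: with Gaussian kernels, the inner products between density estimates reduce to sums of pairwise kernel evaluations. A minimal sketch (kernel width and data are illustrative; this is not the authors' full registration algorithm, which additionally optimizes a spatial transformation):

```python
import numpy as np

def gauss_sum(X, Y, sigma):
    # Mean of Gaussian kernel evaluations over all pairs of rows of X and Y;
    # proportional to the inner product of the two Parzen density estimates.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (4.0 * sigma ** 2)).mean()

def cs_divergence(X, Y, sigma=0.5):
    """Cauchy-Schwarz divergence between Parzen estimates of two point sets:
    D_CS = -log( <p,q> / sqrt(<p,p><q,q>) ); zero iff the estimates coincide."""
    cross = gauss_sum(X, Y, sigma)
    return -np.log(cross / np.sqrt(gauss_sum(X, X, sigma) * gauss_sum(Y, Y, sigma)))

rng = np.random.default_rng(2)
A = rng.normal(0.0, 1.0, (200, 2))
B = rng.normal(0.0, 1.0, (200, 2))   # drawn from the same distribution as A
C = rng.normal(3.0, 1.0, (200, 2))   # shifted distribution

print(cs_divergence(A, B), cs_divergence(A, C))
```

In registration, this divergence serves as the cost function: the spatial transformation applied to one point set is optimized to drive `cs_divergence` toward zero.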

  20. Adventures in theoretical astrophysics

    NASA Astrophysics Data System (ADS)

    Farmer, Alison Jane

    This thesis is a tour of topics in theoretical astrophysics, unified by their diversity and their pursuit of physical understanding of astrophysical phenomena. In the first chapter, we raise the possibility of the detection of white dwarfs in transit surveys for extrasolar Earths, and discuss the peculiarities of detecting these more massive objects. A population synthesis calculation of the gravitational wave background from extragalactic binary stars is then presented. In this study, we establish a firm understanding of the uncertainties in such a calculation and provide a valuable reference for planning the Laser Interferometer Space Antenna mission. The long-established problem of cosmic ray confinement to the Galaxy is addressed in another chapter. We introduce a new wave damping mechanism, due to the presence of background turbulence, that prevents the confinement of cosmic rays by the resonant streaming instability. We also investigate the spokes in Saturn's B ring, an electrodynamic mystery that is being illuminated by new data sent back from the Cassini spacecraft. In particular, we present assessments of the presence of charged dust near the rings, and the size of currents and electric fields in the ring system. We make inferences from the Cassini discovery of oxygen ions above the rings. In addition, the previous leading theory for spoke formation is demonstrated to be unphysical. In the final chapter, we explain the wayward motions of Prometheus and Pandora, two small moons of Saturn. Previously found to be chaotic as a result of mutual interactions, we account for their behavior by analogy with a parametric pendulum. We caution that this behavior may soon enter a new regime.

  1. TAD- THEORETICAL AERODYNAMICS PROGRAM

    NASA Technical Reports Server (NTRS)

    Barrowman, J.

    1994-01-01

    This theoretical aerodynamics program, TAD, was developed to predict the aerodynamic characteristics of vehicles with sounding rocket configurations. These slender, axisymmetric finned vehicle configurations have a wide range of aeronautical applications from rockets to high speed armament. Over a given range of Mach numbers, TAD will compute the normal force coefficient derivative, the center-of-pressure, the roll forcing moment coefficient derivative, the roll damping moment coefficient derivative, and the pitch damping moment coefficient derivative of a sounding rocket configured vehicle. The vehicle may consist of a sharp pointed nose of cone or tangent ogive shape, up to nine other body divisions of conical shoulder, conical boattail, or circular cylinder shape, and fins of trapezoid planform shape with constant cross section and either three or four fins per fin set. The characteristics computed by TAD have been shown to be accurate to within ten percent of experimental data in the supersonic region. The TAD program calculates the characteristics of separate portions of the vehicle, calculates the interference between separate portions of the vehicle, and then combines the results to form a total vehicle solution. Also, TAD can be used to calculate the characteristics of the body or fins separately as an aid in the design process. Input to the TAD program consists of simple descriptions of the body and fin geometries and the Mach range of interest. Output includes the aerodynamic characteristics of the total vehicle, or user-selected portions, at specified points over the Mach range. The TAD program is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 computer with a central memory requirement of approximately 123K of 8 bit bytes. The TAD program was originally developed in 1967 and last updated in 1972.

  2. Missing Data and IRT Item Parameter Estimation.

    ERIC Educational Resources Information Center

    DeMars, Christine

    The situation of nonrandomly missing data has theoretically different implications for item parameter estimation depending on whether joint maximum likelihood or marginal maximum likelihood methods are used in the estimation. The objective of this paper is to illustrate what potentially can happen, under these estimation procedures, when there is…

  3. Topics in Theoretical Physics

    SciTech Connect

    Cohen, Andrew; Schmaltz, Martin; Katz, Emmanuel; Rebbi, Claudio; Glashow, Sheldon; Brower, Richard; Pi, So-Young

    2016-09-30

    This award supported a broadly based research effort in theoretical particle physics, including research aimed at uncovering the laws of nature at short (subatomic) and long (cosmological) distances. These theoretical developments apply to experiments in laboratories such as CERN, the facility that operates the Large Hadron Collider outside Geneva, as well as to cosmological investigations done using telescopes and satellites. The results reported here apply to physics beyond the so-called Standard Model of particle physics; physics of high energy collisions such as those observed at the Large Hadron Collider; theoretical and mathematical tools and frameworks for describing the laws of nature at short distances; cosmology and astrophysics; and analytic and computational methods to solve theories of short distance physics. Some specific research accomplishments include + Theories of the electroweak interactions, the forces that give rise to many forms of radioactive decay; + Physics of the recently discovered Higgs boson. + Models and phenomenology of dark matter, the mysterious component of the universe that has so far been detected only by its gravitational effects. + High energy particles in astrophysics and cosmology. + Algorithmic research and computational methods for physics of and beyond the Standard Model. + Theory and applications of relativity and its possible limitations. + Topological effects in field theory and cosmology. + Conformally invariant systems and AdS/CFT. This award also supported significant training of students and postdoctoral fellows to lead the research effort in particle theory for the coming decades. These students and fellows worked closely with other members of the group as well as theoretical and experimental colleagues throughout the physics community. Many of the research projects funded by this grant arose in response to recently obtained experimental results in the areas of particle physics and cosmology. We describe a few of

  4. The Basic Theoretical Framework

    NASA Astrophysics Data System (ADS)

    Loeb, Abraham

    Cosmology is by now a mature experimental science. We are privileged to live at a time when the story of genesis (how the Universe started and developed) can be critically explored by direct observations. Looking deep into the Universe through powerful telescopes, we can see images of the Universe when it was younger because of the finite time it takes light to travel to us from distant sources. Existing data sets include an image of the Universe when it was 0.4 million years old (in the form of the cosmic microwave background), as well as images of individual galaxies when the Universe was older than a billion years. But there is a serious challenge: in between these two epochs was a period when the Universe was dark, stars had not yet formed, and the cosmic microwave background no longer traced the distribution of matter. And this is precisely the most interesting period, when the primordial soup evolved into the rich zoo of objects we now see. The observers are moving ahead along several fronts. The first involves the construction of large infrared telescopes on the ground and in space that will provide us with new photos of the first galaxies. Current plans include ground-based telescopes which are 24-42 m in diameter, and NASA's successor to the Hubble Space Telescope, called the James Webb Space Telescope. In addition, several observational groups around the globe are constructing radio arrays that will be capable of mapping the three-dimensional distribution of cosmic hydrogen in the infant Universe. These arrays are aiming to detect the long-wavelength (redshifted 21-cm) radio emission from hydrogen atoms. The images from these antenna arrays will reveal how the non-uniform distribution of neutral hydrogen evolved with cosmic time and eventually was extinguished by the ultra-violet radiation from the first galaxies. Theoretical research has focused in recent years on predicting the expected signals for the above instruments and motivating these ambitious

  5. Theoretical models for supernovae

    SciTech Connect

    Woosley, S.E.; Weaver, T.A.

    1981-09-21

    The results of recent numerical simulations of supernova explosions are presented and a variety of topics discussed. Particular emphasis is given to (i) the nucleosynthesis expected from intermediate mass (10 M☉ ≤ M ≤ 100 M☉) Type II supernovae and detonating white dwarf models for Type I supernovae, (ii) a realistic estimate of the γ-line fluxes expected from this nucleosynthesis, (iii) the continued evolution, in one and two dimensions, of intermediate mass stars wherein iron core collapse does not lead to a strong, mass-ejecting shock wave, and (iv) the evolution and explosion of very massive stars (M ≥ 100 M☉) of both Population I and III. In one dimension, nuclear burning following a failed core bounce does not appear likely to lead to a supernova explosion although, in two dimensions, a combination of rotation and nuclear burning may do so. Near solar proportions of elements from neon to calcium and very brilliant optical displays may be created by hypernovae, the explosions of stars in the mass range 100 M☉ to 300 M☉. Above approximately 300 M☉ a black hole is created by stellar collapse following carbon ignition. Still more massive stars may be copious producers of ⁴He and ¹⁴N prior to their collapse on the pair instability.

  6. Theoretical basis for the Beale number

    NASA Astrophysics Data System (ADS)

    West, C. D.

    The Beale number is an important, empirically derived figure relating the power output of a Stirling engine to working gas pressure, operating frequency, and piston displacement. It is widely used to make preliminary estimates of performance of new designs and to compare the performance of existing engines. Two separate areas of investigation (the simplified formula for power output of an ideal machine first derived by Cooke-Yarborough, and the actual performance ratings of several real engines collected by Martini) are combined to give a theoretical value for the Beale number and a straightforward explanation of its physical significance. The derived value is in good agreement with the empirical figure and is consistent with Walker's estimates of the temperature dependence of the Beale number.
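    The Beale relation behind this figure reduces to a one-line estimate. A minimal sketch, assuming the commonly quoted unit convention (power in W, mean working gas pressure in bar, frequency in Hz, piston displacement in cm³) and the classic empirical value Bn ≈ 0.15; the numbers in the usage note are illustrative, not from this report:

```python
def beale_power(mean_pressure_bar, freq_hz, displacement_cm3, beale_number=0.15):
    """Preliminary Stirling engine power estimate from the Beale relation.

    P [W] ~ Bn * p [bar] * f [Hz] * V [cm^3], with Bn ~ 0.15 as the classic
    empirical figure. In practice Bn varies with heater temperature, which is
    the temperature dependence discussed above.
    """
    return beale_number * mean_pressure_bar * freq_hz * displacement_cm3
```

    For example, at 10 bar mean pressure, 50 Hz, and 100 cm³ of displacement, the relation predicts 0.15 × 10 × 50 × 100 = 7500 W.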

  7. Estimation of Fluid Properties and Phase Equilibria.

    ERIC Educational Resources Information Center

    Herskowitz, M.

    1985-01-01

    Describes a course (given to junior/senior students with strong background in thermodynamics and transport phenomena) that covers the theoretical and practical aspects of properties estimation. An outline for the course is included. (JN)

  8. Attitude Estimation or Quaternion Estimation?

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    2003-01-01

    The attitude of spacecraft is represented by a 3x3 orthogonal matrix with unity determinant, which belongs to the three-dimensional special orthogonal group SO(3). The fact that all three-parameter representations of SO(3) are singular or discontinuous for certain attitudes has led to the use of higher-dimensional nonsingular parameterizations, especially the four-component quaternion. In attitude estimation, we are faced with the alternatives of using an attitude representation that is either singular or redundant. Estimation procedures fall into three broad classes. The first estimates a three-dimensional representation of attitude deviations from a reference attitude parameterized by a higher-dimensional nonsingular parameterization. The deviations from the reference are assumed to be small enough to avoid any singularity or discontinuity of the three-dimensional parameterization. The second class, which estimates a higher-dimensional representation subject to enough constraints to leave only three degrees of freedom, is difficult to formulate and apply consistently. The third class estimates a representation of SO(3) with more than three dimensions, treating the parameters as independent. We refer to the most common member of this class as quaternion estimation, to contrast it with attitude estimation. We analyze the first and third of these approaches in the context of an extended Kalman filter with simplified kinematics and measurement models.
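    The redundancy of the quaternion representation is easy to exhibit in code: a unit quaternion q and its negative -q map to the same rotation matrix in SO(3). A minimal sketch (the scalar-last component order is our arbitrary convention, not one taken from the paper):

```python
import numpy as np

def quat_to_rotation(q):
    """Rotation matrix in SO(3) from a unit quaternion q = (x, y, z, w).

    Scalar-last convention. Note that q and -q map to the same matrix: the
    four-component quaternion is a two-to-one, but everywhere nonsingular,
    parameterization of SO(3).
    """
    x, y, z, w = q / np.linalg.norm(q)   # guard against drift off the unit sphere
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])
```

    The identity quaternion (0, 0, 0, 1) yields the identity matrix, and a rotation of 90° about z yields the expected permutation-like matrix.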

  9. Non-invasive method for the aortic blood pressure waveform estimation using the measured radial EBI

    NASA Astrophysics Data System (ADS)

    Krivoshei, Andrei; Lamp, Jürgen; Min, Mart; Uuetoa, Tiina; Uuetoa, Hasso; Annus, Paul

    2013-04-01

    The paper presents a method for Central Aortic Pressure (CAP) waveform estimation from the measured radial Electrical Bio-Impedance (EBI). The method proposed here is a non-invasive and health-safe approach to estimating cardiovascular system parameters such as the Augmentation Index (AI). Reconstruction of the CAP curve from the EBI data is provided by spectral domain transfer functions (TF), found on the basis of data analysis. Clinical experiments were carried out on 30 patients, aged 43 to 80 years, in the Center of Cardiology of East-Tallinn Central Hospital during coronary angiography. The quality and reliability of the method were tested by comparing the augmentation indices obtained from the invasively measured CAP data and from the reconstructed curve. A correlation coefficient of r = 0.89 was calculated in the range of AICAP values from 5 to 28. Compared to the traditional tonometry-based method, the developed one is more convenient to use and allows long-term monitoring of the AI, which is not possible with tonometry probes.
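    A transfer-function reconstruction of this kind can be sketched on synthetic data: estimate H(f) as the spectral ratio of a training CAP/EBI pair, then apply it to a measured EBI waveform. This only illustrates the mechanism; the paper's TFs were derived from clinical data analysis across patients, and the eps regularization here is our addition:

```python
import numpy as np

def estimate_tf(ebi, cap, eps=1e-12):
    """Spectral-domain transfer function H(f) ~ CAP(f) / EBI(f).

    Estimated from one training pair of waveforms; eps regularizes
    near-empty frequency bins. Sketch only, not the paper's procedure.
    """
    E = np.fft.rfft(ebi)
    C = np.fft.rfft(cap)
    return C * np.conj(E) / (np.abs(E) ** 2 + eps)

def apply_tf(H, ebi):
    """Reconstruct a CAP waveform from a measured EBI waveform via H."""
    return np.fft.irfft(H * np.fft.rfft(ebi), n=len(ebi))
```

    When the true CAP-to-EBI relationship is linear and time-invariant, applying the estimated H to the training EBI recovers the training CAP essentially exactly.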

  10. Estimating the Modified Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles

    1995-01-01

    A paper at the 1992 FCS showed how to express the modified Allan variance (mvar) in terms of the third difference of the cumulative sum of time residuals. Although this reformulated definition was presented merely as a computational trick for simplifying the calculation of mvar estimates, it has since turned out to be a powerful theoretical tool for deriving the statistical quality of those estimates in terms of their equivalent degrees of freedom (edf), defined for an estimator V by edf V = 2(EV)²/var V. Confidence intervals for mvar can then be constructed from levels of the appropriate χ² distribution.
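    The third-difference-of-cumsum formulation can be sketched directly. A minimal implementation assuming evenly spaced phase (time residual) data and the standard fully overlapping normalization; function and variable names are ours:

```python
import numpy as np

def mvar(x, m, tau0=1.0):
    """Modified Allan variance via third differences of a cumulative sum.

    x: time (phase) residuals in seconds; m: averaging factor; tau0: base
    sampling interval, so the averaging time is tau = m*tau0. Implements

        Mod sigma_y^2(tau) = mean((w[j+3m] - 3 w[j+2m] + 3 w[j+m] - w[j])^2)
                             / (2 m^4 tau0^2),

    where w is the cumulative sum of x with w[0] = 0, equivalent to the
    usual double-sum definition. A minimal sketch; no edf-based confidence
    intervals are computed here.
    """
    x = np.asarray(x, dtype=float)
    if x.size < 3 * m + 1:
        raise ValueError("need at least 3*m + 1 phase samples")
    w = np.concatenate(([0.0], np.cumsum(x)))                       # w[0] = 0
    d3 = w[3*m:] - 3.0*w[2*m:-m] + 3.0*w[m:-2*m] - w[:-3*m]         # third differences at lag m
    return float(np.mean(d3 ** 2) / (2.0 * m**4 * tau0**2))
```

    The vectorized third-difference form agrees term by term with the direct definition (the averaged second difference of phase, squared), which is what makes it a convenient computational and theoretical tool.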

  11. Theoretical Principles of Distance Education.

    ERIC Educational Resources Information Center

    Keegan, Desmond, Ed.

    This book contains the following papers examining the didactic, academic, analytic, philosophical, and technological underpinnings of distance education: "Introduction"; "Quality and Access in Distance Education: Theoretical Considerations" (D. Randy Garrison); "Theory of Transactional Distance" (Michael G. Moore);…

  12. Theoretical Foundations of Learning Communities

    ERIC Educational Resources Information Center

    Jessup-Anger, Jody E.

    2015-01-01

    This chapter describes the historical and contemporary theoretical underpinnings of learning communities and argues that there is a need for more complex models in conceptualizing and assessing their effectiveness.

  13. An integrative nursing theoretical framework.

    PubMed

    Schmieding, N J

    1990-04-01

    The use of an integrative nursing theoretical framework for both clinical and administrative practice has recently been suggested. The author developed a theoretical framework which incorporates key concepts from the writings of Ida J. Orlando and Virginia Henderson and proposes it to be used as an integrative framework. The rationale for using a framework is discussed along with clinical and administrative examples of how to integrate concepts from the proposed framework. The reasons for using an integrative theoretical framework are that it: serves as a guide for both clinical and administrative decisions; forms the basis of the nursing philosophy; facilitates communication with patients and colleagues; helps identify congruent supporting theories and concepts; provides a basis for educational programmes; helps to differentiate nursing from non-nursing activities; and enhances nurse unity and self-esteem. The premise of the article is that benefits are derived from the use of a nursing theoretical framework because it provides a specific vision of nursing.

  14. Theoretical Studies of Nanocluster Formation

    DTIC Science & Technology

    2016-05-26

    Briefing charts (viewgraphs) on theoretical studies of nanocluster formation, covering the period 22 April 2016 - 25 May 2016.

  15. Condition Number Regularized Covariance Estimation.

    PubMed

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
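    The effect of capping the condition number can be illustrated by clipping the sample eigenvalues. This is a simplified stand-in, not the paper's estimator: the paper derives the maximum-likelihood-optimal truncation level, whereas here the upper end is simply fixed at the leading sample eigenvalue:

```python
import numpy as np

def cond_regularized_cov(X, kappa=50.0):
    """Covariance estimate with condition number capped at kappa.

    Eigenvalues of the sample covariance are clipped into
    [lambda_max / kappa, lambda_max], so the result is invertible and
    well-conditioned even when p > n and the sample covariance is singular.
    Simplified illustration; assumes the data are not degenerate (positive
    leading eigenvalue).
    """
    S = np.cov(X, rowvar=False)              # p x p sample covariance
    vals, vecs = np.linalg.eigh(S)
    hi = vals[-1]                            # leading sample eigenvalue
    clipped = np.clip(vals, hi / kappa, hi)  # enforce cond <= kappa
    return (vecs * clipped) @ vecs.T
```

    In the "large p small n" regime the sample covariance has zero eigenvalues; after clipping, the estimate is positive definite with condition number at most kappa.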

  16. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.

  17. ESTIMATING BASAL ENERGY EXPENDITURE IN LIVER TRANSPLANT RECIPIENTS: THE VALUE OF THE HARRIS-BENEDICT EQUATION

    PubMed Central

    PINTO, Andressa S.; CHEDID, Marcio F.; GUERRA, Léa T.; ÁLVARES-DA-SILVA, Mario R.; de ARAÚJO, Alexandre; GUIMARÃES, Luciano S.; LEIPNITZ, Ian; CHEDID, Aljamir D.; KRUEL, Cleber R. P.; GREZZANA-FILHO, Tomaz J. M.; KRUEL, Cleber D. P.

    2016-01-01

    ABSTRACT Background: Reliable measurement of basal energy expenditure (BEE) in liver transplant (LT) recipients is necessary for adapting energy requirements, improving nutritional status and preventing weight gain. Indirect calorimetry (IC) is the gold standard for measuring BEE. However, BEE may be estimated through alternative methods, including electrical bioimpedance (BI), the Harris-Benedict Equation (HBE), and the Mifflin-St. Jeor Equation (MSJ), which are easier to apply and carry lower cost. Aim: To determine which of the three alternative methods for BEE estimation (HBE, BI and MSJ) provides the most reliable BEE estimation in LT recipients. Methods: Prospective cross-sectional study including dyslipidemic LT recipients in follow-up at a 735-bed tertiary referral university hospital. Comparisons of BEE measured through IC to BEE estimated through each of the three alternative methods (HBE, BI and MSJ) were performed using the Bland-Altman method and Wilcoxon Rank Sum test. Results: Forty-five patients were included, aged 58±10 years. BEE measured using IC was 1664±319 kcal for males, and 1409±221 kcal for females. Average difference between BEE measured by IC (1534±300 kcal) and BI (1584±377 kcal) was +50 kcal (p=0.0384). Average difference between BEE measured using IC (1534±300 kcal) and MSJ (1479.6±375 kcal) was -55 kcal (p=0.16). Average difference between BEE values measured by IC (1534±300 kcal) and HBE (1521±283 kcal) was -13 kcal (p=0.326). The difference between BEE measured through IC and estimated through HBE was less than 100 kcal for 39 of 43 patients. Conclusions: Among the three alternative methods, HBE was the most reliable for estimating BEE in LT recipients. PMID:27759783
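    The two predictive equations compared above have simple closed forms. A sketch using the commonly cited coefficients from the general literature (not values taken from this paper); inputs are weight in kg, height in cm, and age in years:

```python
def harris_benedict(weight_kg, height_cm, age_yr, male=True):
    """Basal energy expenditure (kcal/day), original Harris-Benedict equation.

    Coefficients are the commonly cited original values; an illustrative
    implementation, not the paper's code.
    """
    if male:
        return 66.47 + 13.75 * weight_kg + 5.00 * height_cm - 6.76 * age_yr
    return 655.1 + 9.56 * weight_kg + 1.85 * height_cm - 4.68 * age_yr

def mifflin_st_jeor(weight_kg, height_cm, age_yr, male=True):
    """Basal energy expenditure (kcal/day), Mifflin-St. Jeor equation."""
    bee = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr
    return bee + 5.0 if male else bee - 161.0
```

    For a hypothetical 70 kg, 170 cm, 50-year-old male these give roughly 1541 kcal (HBE) and 1518 kcal (MSJ), values of the same order as the IC means reported above.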

  18. Thermodynamic estimation: Ionic materials

    SciTech Connect

    Glasser, Leslie

    2013-10-15

    Thermodynamics establishes equilibrium relations among thermodynamic parameters (“properties”) and delineates the effects of variation of the thermodynamic functions (typically temperature and pressure) on those parameters. However, classical thermodynamics does not provide values for the necessary thermodynamic properties, which must be established by extra-thermodynamic means such as experiment, theoretical calculation, or empirical estimation. While many values may be found in the numerous collected tables in the literature, these are necessarily incomplete because either the experimental measurements have not been made or the materials may be hypothetical. The current paper presents a number of simple and reliable estimation methods for thermodynamic properties, principally for ionic materials. The results may also be used as a check for obvious errors in published values. The estimation methods described are typically based on addition of properties of individual ions, or sums of properties of neutral ion groups (such as “double” salts, in the Simple Salt Approximation), or based upon correlations such as with formula unit volumes (Volume-Based Thermodynamics). - Graphical abstract: Thermodynamic properties of ionic materials may be readily estimated by summation of the properties of individual ions, by summation of the properties of ‘double salts’, and by correlation with formula volume. Such estimates may fill gaps in the literature, and may also be used as checks of published values. This simplicity arises from exploitation of the fact that repulsive energy terms are of short range and very similar across materials, while coulombic interactions provide a very large component of the attractive energy in ionic systems. - Highlights: • Estimation methods for thermodynamic properties of ionic materials are introduced. • Methods are based on summation of single ions, multiple salts, and correlations. • Heat capacity, entropy

  19. Computational Estimation

    ERIC Educational Resources Information Center

    Fung, Maria G.; Latulippe, Christine L.

    2010-01-01

    Elementary school teachers are responsible for constructing the foundation of number sense in youngsters, and so it is recommended that teacher-training programs include an emphasis on number sense to ensure the development of dynamic, productive computation and estimation skills in students. To better prepare preservice elementary school teachers…

  20. Estimation Destinations.

    ERIC Educational Resources Information Center

    Threewit, Fran

    This book leads students through a journey of hands-on investigations of skill-based estimation. The 30 lessons in the book are grouped into four units: Holding Hands, The Real Scoop, Container Calculations, and Estimeasurements. In each unit children work with unique, real materials intended to build an awareness of number, quantity, and…

  1. The Problems of Multiple Feedback Estimation.

    ERIC Educational Resources Information Center

    Bulcock, Jeffrey W.

    The use of two-stage least squares (2SLS) for the estimation of feedback linkages is inappropriate for nonorthogonal data sets because 2SLS is extremely sensitive to multicollinearity. It is argued that what is needed is use of a different estimating criterion than the least squares criterion. Theoretically the variance normalization criterion has…

  2. Influence of heart motion on cardiac output estimation by means of electrical impedance tomography: a case study.

    PubMed

    Proença, Martin; Braun, Fabian; Rapin, Michael; Solà, Josep; Adler, Andy; Grychtol, Bartłomiej; Bohm, Stephan H; Lemay, Mathieu; Thiran, Jean-Philippe

    2015-06-01

    Electrical impedance tomography (EIT) is a non-invasive imaging technique that can measure cardiac-related intra-thoracic impedance changes. EIT-based cardiac output estimation relies on the assumption that the amplitude of the impedance change in the ventricular region is representative of stroke volume (SV). However, other factors such as heart motion can significantly affect this ventricular impedance change. In the present case study, a magnetic resonance imaging-based dynamic bio-impedance model fitting the morphology of a single male subject was built. Simulations were performed to evaluate the contribution of heart motion and its influence on EIT-based SV estimation. Myocardial deformation was found to be the main contributor to the ventricular impedance change (56%). However, motion-induced impedance changes showed a strong correlation (r = 0.978) with left ventricular volume. We explained this by the quasi-incompressibility of blood and myocardium. As a result, EIT achieved excellent accuracy in estimating a wide range of simulated SV values (error distribution of 0.57 ± 2.19 ml (1.02 ± 2.62%) and correlation of r = 0.996 after a two-point calibration was applied to convert impedance values to millilitres). As the model was based on one single subject, the strong correlation found between motion-induced changes and ventricular volume remains to be verified in larger datasets.
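    The two-point calibration mentioned at the end is a generic linear mapping fixed by two reference (impedance, volume) pairs. A sketch with hypothetical reference values; the study's actual calibration points are not reproduced here:

```python
def two_point_calibration(z1, v1, z2, v2):
    """Return a linear map v(z) fixed by two reference pairs.

    (z1, v1) and (z2, v2) are impedance-change amplitudes paired with their
    reference volumes (e.g. from a gold-standard SV measurement). This is
    the generic two-point procedure, not the study's specific values.
    """
    slope = (v2 - v1) / (z2 - z1)
    return lambda z: v1 + slope * (z - z1)
```

    For instance, with hypothetical anchors (0.8, 40 ml) and (1.6, 80 ml), an impedance amplitude of 1.2 maps to 60 ml, and both anchors are reproduced exactly.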

  3. Theoretical design of lightning panel

    NASA Astrophysics Data System (ADS)

    Emetere, M. E.; Olawole, O. F.; Sanni, S. E.

    2016-02-01

    The light trapping device (LTD) was theoretically designed to suggest the best way of harvesting the energy derived from natural lightning. Maxwell's equations were expanded using a virtual experiment in a MATLAB environment. Several parameters, such as lightning flash and temperature distribution, were considered to investigate the ability of the theoretical lightning panel to convert energy efficiently. The results for the lightning strike angle on the surface of the LTD show the maximum power expected per time. The results of the microscopic thermal distribution show that if the LTD casing controls the transmission of the heat energy, then thermal energy storage (TES) can be introduced to the lightning farm.

  4. Theoretical Biology: Organisms and Mechanisms

    NASA Astrophysics Data System (ADS)

    Landauer, Christopher; Bellman, Kirstie L.

    2002-09-01

    The Theoretical Biology Program initiated by Robert Rosen is intended to identify the key theoretical characteristics of organisms, especially those that distinguish organisms from mechanisms, by looking for the proper abstractions and defining the appropriate relationships. There are strong claims about the distinctions in Rosen's book "Life Itself", along with some purported proofs of these assertions. Unfortunately, the Mathematics is incorrect, and the assertions remain unproven (and some of them are simply false). In this paper, we present the ideas of Rosen's approach, demonstrate that his Mathematical formulations and proofs are wrong, and then show how they might be made more successful.

  5. Theoretical analysis of the EWEC report

    NASA Technical Reports Server (NTRS)

    1976-01-01

    This analytic investigation shows how the electromagnetic wave energy conversion (EWEC) device, as used for solar-to-electric power conversion, is significantly different from solar cells, with respect to principles of operation. An optimistic estimate of efficiency is about 80% for a full-wave rectifying configuration with solar radiation normally incident. This compares favorably with the theoretical maximum for a CdTe solar cell (23.5%), as well as with the efficiencies of more familiar cells: Si (19.5%), InP (21.5%), and GaAs (23%). Some key technological issues that must be resolved before the EWEC device can be realized are identified. Those issues include: the fabrication of a pn semi-conductor junction with no permittivity resonances in the optical band; and the efficient channeling of the power received by countless microscopic horn antennas through a relatively few number of wires.

  6. Theoretical calculation of polarizability isotope effects.

    PubMed

    Moncada, Félix; Flores-Moreno, Roberto; Reyes, Andrés

    2017-03-01

    We propose a scheme to estimate hydrogen isotope effects on molecular polarizabilities. This approach combines the any-particle molecular orbital method, in which both electrons and H/D nuclei are described as quantum waves, with auxiliary density perturbation theory, to calculate the polarizability tensor analytically. We assess the performance of the method by calculating the polarizability isotope effect for 20 molecules. A good correlation between theoretical and experimental data is found. Further analysis of the results reveals that the change in the polarizability of an X-H bond upon deuteration decreases as the electronegativity of X increases. Our investigation also reveals that the molecular polarizability isotope effect presents an additive character. Therefore, it can be computed by counting the number of deuterated bonds in the molecule.

  7. Theoretical molecular studies of astrophysical interest

    NASA Technical Reports Server (NTRS)

    Flynn, George

    1991-01-01

    When work under this grant began in 1974 there was a great need for state-to-state collisional excitation rates for interstellar molecules observed by radio astronomers. These were required to interpret observed line intensities in terms of local temperatures and densities, but, owing to lack of experimental or theoretical values, estimates then being used for this purpose ranged over several orders of magnitude. A problem of particular interest was collisional excitation of formaldehyde; Townes and Cheung had suggested that the relative size of different state-to-state rates (propensity rules) was responsible for the anomalous absorption observed for this species. We believed that numerical molecular scattering techniques (in particular the close coupling or coupled channel method) could be used to obtain accurate results, and that these would be computationally feasible since only a few molecular rotational levels are populated at the low temperatures thought to prevail in the observed regions. Such calculations also require detailed knowledge of the intermolecular forces, but we thought that those could also be obtained with sufficient accuracy by theoretical (quantum chemical) techniques. Others, notably Roy Gordon at Harvard, had made progress in solving the molecular scattering equations, generally using semi-empirical intermolecular potentials. Work done under this grant generalized Gordon's scattering code, and introduced the use of theoretical interaction potentials obtained by solving the molecular Schroedinger equation. Earlier work had considered only the excitation of a diatomic molecule by collisions with an atom, and we extended the formalism to include excitation of more general molecular rotors (e.g., H2CO, NH2, and H2O) and also collisions of two rotors (e.g., H2-H2).

  8. Space Service Market (Theoretical Aspect)

    NASA Astrophysics Data System (ADS)

    Prisniakov, V. F.; Prisniakova, L. M.

    The authors propose a mathematical model of demand and supply in market economics, and in the market of space services in particular. A theoretical demand formula and a real demand curve are compared. The market equilibrium price is defined. The space market dynamics is studied. The calculations are carried out for parameters that are close to those of the market of space services.

  9. Theoretical Perspectives for Developmental Education.

    ERIC Educational Resources Information Center

    Lundell, Dana Britt, Ed.; Higbee, Jeanne L., Ed.

    This monograph from the University of Minnesota General College (GC) discusses theoretical perspectives on developmental education from both new and established standpoints. GC voluntarily eliminated its degree programs in order to focus on preparing under-prepared students for transfer to the university system. GC's curricular model includes a…

  10. Theoretical understanding of charm decays

    SciTech Connect

    Bigi, I.I.

    1986-08-01

    A detailed description of charm decays has emerged, and the various concepts involved are sketched. Although this description is quite successful in reproducing the data, the chapter on heavy-flavour decays is far from closed. Relevant questions, such as the real strength of weak annihilation, Penguin operators, etc., are still unanswered. Important directions for future work, on both the experimental and theoretical sides, are identified.

  11. Theoretical Foundations of Learning Environments

    ERIC Educational Resources Information Center

    Jonassen, David H., Ed.; Land, Susan M., Ed.

    1999-01-01

    "Theoretical Foundations of Learning Environments" describes the most contemporary psychological and pedagogical theories that are foundations for the conception and design of open-ended learning environments and new applications of educational technologies. In the past decade, the cognitive revolution of the 60s and 70s has been…

  12. Lightning Talks 2015: Theoretical Division

    SciTech Connect

    Shlachter, Jack S.

    2015-11-25

    This document is a compilation of slides from a number of student presentations given to LANL Theoretical Division members. The subjects cover the range of activities of the Division, including plasma physics, environmental issues, materials research, bacterial resistance to antibiotics, and computational methods.

  13. Asking Research Questions: Theoretical Presuppositions

    ERIC Educational Resources Information Center

    Tenenberg, Josh

    2014-01-01

    Asking significant research questions is a crucial aspect of building a research foundation in computer science (CS) education. In this article, I argue that the questions that we ask are shaped by internalized theoretical presuppositions about how the social and behavioral worlds operate. And although such presuppositions are essential in making…

  14. Data, Methods, and Theoretical Implications

    ERIC Educational Resources Information Center

    Hannagan, Rebecca J.; Schneider, Monica C.; Greenlee, Jill S.

    2012-01-01

    Within the subfields of political psychology and the study of gender, the introduction of new data collection efforts, methodologies, and theoretical approaches are transforming our understandings of these two fields and the places at which they intersect. In this article we present an overview of the research that was presented at a National…

  15. High-accuracy theoretical thermochemistry of fluoroethanes.

    PubMed

    Nagy, Balázs; Csontos, Botond; Csontos, József; Szakács, Péter; Kállay, Mihály

    2014-07-03

    A highly accurate coupled-cluster-based ab initio model chemistry has been applied to calculate the thermodynamic functions including enthalpies of formation and standard entropies for fluorinated ethane derivatives, C2HxF6-x (x = 0-5), as well as ethane, C2H6. The invoked composite protocol includes contributions up to quadruple excitations in coupled-cluster (CC) theory as well as corrections beyond the nonrelativistic and Born-Oppenheimer approximations. For species CH2F-CH2F, CH2F-CHF2, and CHF2-CHF2, where anti/gauche isomerism occurs due to the hindered rotation around the C-C bond, conformationally averaged enthalpies and entropies at 298.15 K are also calculated. The results obtained here are in reasonable agreement with previous experimental and theoretical findings, and for all fluorinated ethanes except CH2FCH3 and C2F6 this study delivers the best available theoretical enthalpy and entropy estimates.

  16. Electron microscopy and theoretical modeling of cochleates.

    PubMed

    Nagarsekar, Kalpa; Ashtikar, Mukul; Thamm, Jana; Steiniger, Frank; Schacher, Felix; Fahr, Alfred; May, Sylvio

    2014-11-11

    Cochleates are self-assembled cylindrical condensates that consist of large rolled-up lipid bilayer sheets and represent a novel platform for oral and systemic delivery of therapeutically active medicinal agents. With few preceding investigations, the physical basis of cochleate formation has remained largely unexplored. We address the structure and stability of cochleates in a combined experimental/theoretical approach. Employing different electron microscopy methods, we provide evidence for cochleates consisting of phosphatidylserine and calcium to be hollow tubelike structures with a well-defined constant lamellar repeat distance and statistically varying inner and outer radii. To rationalize the relation between inner and outer radii, we propose a theoretical model. Based on the minimization of a phenomenological free energy expression containing a bending, adhesion, and frustration contribution, we predict the optimal tube dimensions of a cochleate and estimate ratios of material constants for cochleates consisting of phosphatidylserines with varied hydrocarbon chain structures. Knowing and understanding these ratios will ultimately benefit the successful formulation of cochleates for drug delivery applications.

  17. An information theoretic approach to pedigree reconstruction.

    PubMed

    Almudevar, Anthony

    2016-02-01

    Network structure is a dominant feature of many biological systems, both at the cellular level and within natural populations. Advances in genotype and gene expression screening made over the last few decades have permitted the reconstruction of these networks. However, resolution to a single model estimate will generally not be possible, leaving open the question of the appropriate method of formal statistical inference. The nonstandard structure of the problem precludes most traditional statistical methodologies. Alternatively, a Bayesian approach provides a natural methodology for formal inference. Construction of a posterior density on the space of network structures allows formal inference regarding features of network structure using specific marginal posterior distributions. An information-theoretic approach to this problem will be described, based on the Minimum Description Length (MDL) principle. This leads to a Bayesian inference model based on the information content of data rather than on more commonly used probabilistic models. The approach is applied to the problem of pedigree reconstruction based on genotypic data. Using this application, it is shown how the MDL approach is able to provide a truly objective control for model complexity. A two-cohort model is used for a simulation study. The MDL approach is compared to COLONY-2, a well-known pedigree reconstruction application. The study highlights the problem of genotyping error modeling. COLONY-2 requires prior error rate estimates, and its accuracy proves to be highly sensitive to these estimates. In contrast, the MDL approach does not require prior error rate estimates, and is able to accurately adjust for genotyping error across the range of models considered.
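    The MDL principle invoked above can be illustrated with a toy two-part code (a generic illustration, not the paper's pedigree model): prefer the model minimizing L(model) + L(data | model), so that parameter cost automatically penalizes complexity.

```python
import math

# Toy two-part MDL model selection for a 0/1 sequence: compare a parameter-free
# fair-coin code against a fitted Bernoulli code that must also pay
# (1/2) log2 n bits to transmit its estimated parameter.

def code_len_bits(data, p):
    """Ideal code length (bits) of a 0/1 sequence under Bernoulli(p)."""
    ones = sum(data)
    zeros = len(data) - ones
    eps = 1e-12  # guard against log(0) at the boundary
    return -(ones * math.log2(max(p, eps)) + zeros * math.log2(max(1.0 - p, eps)))

def mdl_choice(data):
    n = len(data)
    fair = code_len_bits(data, 0.5)                           # no parameters to send
    p_hat = sum(data) / n
    fitted = code_len_bits(data, p_hat) + 0.5 * math.log2(n)  # plus parameter cost
    return "fair" if fair <= fitted else "biased"

print(mdl_choice([0, 1] * 50))          # balanced data: "fair" wins
print(mdl_choice([1] * 95 + [0] * 5))   # skewed data: "biased" wins
```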

  18. Site characterization: a spatial estimation approach

    SciTech Connect

    Candy, J.V.; Mao, N.

    1980-10-01

    In this report the application of spatial estimation techniques, or kriging, to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques for reliably developing contour maps from various data sets is investigated. The estimator is developed theoretically, in a simplified fashion, using vector-matrix calculus. The practice of spatial estimation is discussed, and the estimator is then applied to two groundwater aquifer systems and also used to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly.
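    The kriging estimator described above can be sketched in vector-matrix form; the sample locations, values, and exponential covariance model below are hypothetical, not the report's aquifer data.

```python
import numpy as np

# Minimal ordinary-kriging sketch in one dimension: estimate a field value at
# x0 as a weighted sum of scattered samples, with weights from the kriging
# system (covariances plus a Lagrange row enforcing that weights sum to one).

def cov(h, sill=1.0, rng=10.0):
    """Exponential covariance model C(h) = sill * exp(-h / rng) (assumed form)."""
    return sill * np.exp(-np.asarray(h, float) / rng)

def ordinary_kriging(xs, zs, x0):
    n = len(xs)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(np.abs(xs[:, None] - xs[None, :]))  # sample-to-sample covariances
    K[n, n] = 0.0                                       # Lagrange-multiplier corner
    rhs = np.ones(n + 1)
    rhs[:n] = cov(np.abs(xs - x0))                      # sample-to-target covariances
    w = np.linalg.solve(K, rhs)[:n]                     # kriging weights
    return float(w @ zs)

xs = np.array([0.0, 3.0, 7.0, 12.0])   # hypothetical borehole locations
zs = np.array([5.0, 6.5, 4.0, 3.2])    # hypothetical measured values
print(ordinary_kriging(xs, zs, 3.0))   # exact interpolation at a datum
print(ordinary_kriging(xs, zs, 5.0))   # estimate between boreholes
```

Evaluating the estimator on a grid of `x0` values is what produces the contour maps discussed in the report.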

  19. Identification of operational clues to dry weight prescription in hemodialysis using bioimpedance vector analysis. The Italian Hemodialysis-Bioelectrical Impedance Analysis (HD-BIA) Study Group.

    PubMed

    Piccoli, A

    1998-04-01

    In patients undergoing hemodialysis (HD), cyclic body fluid changes are estimated by body weight variations, which may be misleading. Conventional bioelectrical impedance analysis (BIA) produces biased estimates of fluids in HD due to the assumption of constant tissue hydration. We used an assumption-free assessment of hydration based on direct measurements of the impedance vector. The impedance vector (standard BIA at 50 kHz frequency) was measured in 1367 HD patients, ages 16 to 89 years with BMI 17 to 31 kg/m2, 1116 asymptomatic (680 M and 436 F), and 251 with recurrent HD hypotension (118 M and 133 F), before and after two HD sessions (thrice-weekly bicarbonate dialysis, 210 to 240 min) removing 2.7 kg of fluid. The vector distribution of HD patients was compared to 726 healthy subjects in the same age and BMI range. Individual vector measurements (resistance and reactance components) were plotted on the gender-specific 50th, 75th and 95th percentiles of the vector distribution in the healthy population (reference tolerance ellipses) as a resistance-reactance graph (RXc graph). The wet-dry weight cycling of HD patients was represented on the resistance-reactance plane with a definite, cyclical, backward-forward displacement of the impedance vector. The vectors of patients with HD hypotension were less steep and more often shifted to the right, out of the reference 75% tolerance ellipse, than those of asymptomatic patients. A wet-dry weight prescription based on BIA indications would bring the vectors of patients back into the 75% reference ellipse, where tissue electrical conductivity is restored. Whether HD patients with vector cycling within the normal third-quartile ellipse have better outcomes awaits confirmation by longitudinal evaluation.
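    The tolerance-ellipse test underlying the RXc graph can be sketched as a Mahalanobis-distance check of the height-indexed vector (R/H, Xc/H) against healthy-reference statistics; the mean, covariance, and patient vectors below are illustrative values, not the study's reference data.

```python
# Sketch of the vector-BIA tolerance-ellipse test: a point is inside the 75%
# ellipse when its squared Mahalanobis distance from the reference mean is
# below the chi-square quantile for 2 degrees of freedom at p = 0.75.

def mahalanobis2(point, mean, cov):
    """Squared Mahalanobis distance for a 2-D point and 2x2 covariance."""
    dx, dy = point[0] - mean[0], point[1] - mean[1]
    a, b, c = cov[0][0], cov[0][1], cov[1][1]
    det = a * c - b * b
    # Quadratic form with the closed-form inverse of a symmetric 2x2 matrix.
    return (c * dx * dx - 2 * b * dx * dy + a * dy * dy) / det

# Hypothetical healthy-reference statistics for (R/H, Xc/H) in ohm/m.
mean = (300.0, 35.0)
cov = [[900.0, 60.0], [60.0, 25.0]]
cutoff_75 = 2.773   # chi-square quantile, 2 df, p = 0.75

def inside_75_ellipse(r_h, xc_h):
    return mahalanobis2((r_h, xc_h), mean, cov) <= cutoff_75

print(inside_75_ellipse(310.0, 36.0))   # near the reference mean: inside
print(inside_75_ellipse(420.0, 20.0))   # shifted right and down: outside
```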

  20. A theoretical investigation of thermodynamic effects on developed cavitation

    NASA Technical Reports Server (NTRS)

    Weir, D. S.

    1976-01-01

    The results of a theoretical investigation of thermodynamic effects on developed cavitation are presented. An approximate solution to the conservation equations for a two-phase laminar boundary layer is obtained. This analysis produces an expression for the temperature difference between the liquid and vapor phases which can be applied to developed cavity flows. Experimental data of cavity temperature depressions are correlated using this result. In addition, a theoretical estimate of the Nusselt number for the cavity is made using a turbulent boundary layer cavity model proposed by Brennen. The result agrees in part with empirically determined expressions for the cavity Nusselt number.

  1. Theoretical issues in Spheromak research

    SciTech Connect

    Cohen, R. H.; Hooper, E. B.; LoDestro, L. L.; Mattor, N.; Pearlstein, L. D.; Ryutov, D. D.

    1997-04-01

    This report summarizes the state of theoretical knowledge of several physics issues important to the spheromak. It was prepared as part of the preparation for the Sustained Spheromak Physics Experiment (SSPX), which addresses these goals: energy confinement and the physics which determines it; the physics of transition from a short-pulsed experiment, in which the equilibrium and stability are determined by a conducting wall (a "flux conserver"), to one in which the equilibrium is supported by external coils. Physics is examined in this report in four important areas. The status of present theoretical understanding is reviewed, physics which needs to be addressed more fully is identified, and tools which are available or require more development are described. Specifically, the topics include: MHD equilibrium and design, review of MHD stability, spheromak dynamo, and edge plasma in spheromaks.

  2. Theoretical Problems in Materials Science

    NASA Technical Reports Server (NTRS)

    Langer, J. S.; Glicksman, M. E.

    1985-01-01

    Interactions between theoretical physics and materials science are presented, with the aim of identifying problems of common interest in which some of the powerful theoretical approaches developed for other branches of physics may be applied to problems in materials science. A unique structure was identified in rapidly quenched Al-14% Mn. The material has long-range directed bonds with icosahedral symmetry and does not form a regular structure but instead an amorphous-like quasiperiodic structure. A theory of finite volume fractions of second-phase material is advanced and coupled with nucleation theory to describe the formation and structure of precipitating phases in alloys. Application of the theory of pattern formation to the problem of dendrite formation is also studied.

  3. Theoretical Advanced Study Institute: 2014

    SciTech Connect

    DeGrand, Thomas

    2016-08-17

    The Theoretical Advanced Study Institute (TASI) was held at the University of Colorado, Boulder, during June 2-27, 2014. The topic was "Journeys through the Precision Frontier: Amplitudes for Colliders." The organizers were Professors Lance Dixon (SLAC) and Frank Petriello (Northwestern and Argonne). There were fifty-one students. Nineteen lecturers gave sixty 75-minute lectures. A Proceedings was published. This TASI was unique for its large emphasis on methods for calculating amplitudes. This was embedded in a program describing recent theoretical and phenomenological developments in particle physics. Topics included introductions to the Standard Model, to QCD (both in a collider context and on the lattice), effective field theories, Higgs physics, neutrino interactions, an introduction to experimental techniques, and cosmology.

  4. Theoretical Issues in Software Engineering.

    DTIC Science & Technology

    1982-09-01

    The discipline of software engineering has transferred the common-sense methods of good programming and management to large software projects. It has been less successful in acquiring a solid theoretical foundation for these methods. The software development process ... justification save practice that has evolved for large, concurrently processed programs. Furthermore, each phase needs formal description and analysis.

  5. Migration, crisis and theoretical conflict.

    PubMed

    Bach, R L; Schraml, L A

    1982-01-01

    The nature of the distinction between the equilibrium and historical-structuralist positions on migration is examined. Theoretical and political differences in the two positions are considered both historically and in the context of the current global economic crisis. The proposal of Wood to focus on households as a strategy for integrating the two perspectives and for achieving a better understanding of migration and social change is discussed.

  6. Theoretical study of the C-H bond dissociation energy of acetylene

    NASA Technical Reports Server (NTRS)

    Taylor, Peter R.; Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.

    1990-01-01

    The authors present a theoretical study of the convergence of the C-H bond dissociation energy (D0) of acetylene with respect to both the one- and n-particle spaces. Their best estimate for D0 of 130.1 +/- 1.0 kcal/mol is slightly below previous theoretical estimates, but substantially above the value determined using Stark anticrossing spectroscopy that is asserted to be an upper bound.

  7. Cryptobiosis: a new theoretical perspective.

    PubMed

    Neuman, Yair

    2006-10-01

    The tardigrade is a microscopic creature that under environmental stress conditions undergoes cryptobiosis [Feofilova, E.P., 2003. Deceleration of vital activity as a universal biochemical mechanism ensuring adaptation of microorganisms to stress factors: A review. Appl. Biochem. Microbiol. 39, 1-18; Nelson, D.R., 2002. Current status of the tardigrada: Evolution and ecology. Integrative Comp. Biol. 42, 652-659]-a temporary metabolic depression-which is considered to be a third state between life and death [Clegg, J.S., 2001. Cryptobiosis-a peculiar state of biological organization. Comp. Biochem. Physiol. Part B 128, 613-624]. In contrast with death, cryptobiosis is a reversible state, and as soon as environmental conditions change, the tardigrade "returns to life." Cryptobiosis in general, and in the tardigrade in particular, is a phenomenon poorly understood [Guppy, M., 2004. The biochemistry of metabolic depression: a history of perceptions. Comp. Biochem. Physiol. Part B 139, 435-442; Schill, R.O., et al., 2004. Stress gene (hsp70) sequences and quantitative expression in Milnesium tardigradum (Tardigrada) during active and cryptobiotic stages. J. Exp. Biol. 207, 1607-1613; Watanabe, M., et al., 2002. Mechanism allowing an insect to survive complete dehydration and extreme temperatures. J. Exp. Biol. 205, 2799-2802; Wright, J.C., 2001. Cryptobiosis 300 years on from van Leeuwenhoek: what have we learned about tardigrades? Zool. Anz. 240, 563-582]. Moreover, the ability of the tardigrade to bootstrap itself and to return to life seems paradoxical, like the legendary Baron von Munchausen who pulled himself out of the swamp by grabbing his own hair. Two theoretical obstacles prevent us from advancing our knowledge of cryptobiosis. First, we lack appropriate theoretical understanding of reversible processes of biological computation in living systems. Second, we lack appropriate theoretical understanding of bootstrapping in living systems. 
In this short opinion

  8. Coverage-adjusted entropy estimation.

    PubMed

    Vu, Vincent Q; Yu, Bin; Kass, Robert E

    2007-09-20

    Data on 'neural coding' have frequently been analyzed using information-theoretic measures. These formulations involve the fundamental and generally difficult statistical problem of estimating entropy. We review briefly several methods that have been advanced to estimate entropy and highlight a method, the coverage-adjusted entropy estimator (CAE), due to Chao and Shen, that appeared recently in the environmental statistics literature. This method begins with the elementary Horvitz-Thompson estimator, developed for sampling from a finite population, and adjusts for the potential new species that have not yet been observed in the sample; these become the new patterns or 'words' in a spike train that have not yet been observed. The adjustment is due to I. J. Good, and is called the Good-Turing coverage estimate. We provide a new empirical regularization derivation of the coverage-adjusted probability estimator, which shrinks the maximum likelihood estimate. We prove that the CAE is consistent and first-order optimal, with rate O_P(1/log n), in the class of distributions with finite entropy variance and that, within the class of distributions with finite qth moment of the log-likelihood, the Good-Turing coverage estimate and the total probability of unobserved words converge at rate O_P(1/(log n)^q). We then provide a simulation study of the estimator with standard distributions and examples from neuronal data, where observations are dependent. The results show that, with a minor modification, the CAE performs much better than the MLE and is better than the best upper bound estimator, due to Paninski, when the number of possible words m is unknown or infinite.
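    The Chao-Shen coverage-adjusted entropy estimator described above is compact enough to sketch directly: shrink the empirical frequencies by the Good-Turing coverage 1 - f1/n, then apply the Horvitz-Thompson correction for words never observed in the sample.

```python
import math
from collections import Counter

# Coverage-adjusted entropy estimator (CAE), sketched from the description
# above: f1 is the number of singletons, C = 1 - f1/n is the Good-Turing
# coverage, and each coverage-adjusted probability is reweighted by its
# Horvitz-Thompson inclusion probability 1 - (1 - p)^n.

def coverage_adjusted_entropy(samples):
    n = len(samples)
    counts = Counter(samples)
    f1 = sum(1 for c in counts.values() if c == 1)   # words seen exactly once
    C = 1.0 - f1 / n                                  # Good-Turing coverage
    H = 0.0
    for c in counts.values():
        p = C * c / n                                 # shrunken probability
        if p > 0.0:
            H -= p * math.log(p) / (1.0 - (1.0 - p) ** n)
    return H                                          # entropy in nats

# Fair four-sided die, no unseen symbols: estimate is close to log 4 nats.
data = [0, 1, 2, 3] * 50
print(coverage_adjusted_entropy(data))
```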

  9. Theoretical studies of combustion dynamics

    SciTech Connect

    Bowman, J.M.

    1993-12-01

    The basic objectives of this research program are to develop and apply theoretical techniques to fundamental dynamical processes of importance in gas-phase combustion. There are two major areas currently supported by this grant. One is reactive scattering of diatom-diatom systems, and the other is the dynamics of complex formation and decay based on L^2 methods. In all of these studies, the authors focus on systems that are of interest experimentally, and for which potential energy surfaces based, at least in part, on ab initio calculations are available.

  10. Theoretical Studies of Reaction Surfaces

    DTIC Science & Technology

    2007-11-02

    Report documentation page (OCR residue): Theoretical Studies of Reaction Surfaces, AASERT93 grant F49620-93-1-0556, Dr. Michael R. Berman, Bolling AFB DC; approved for public release. Surviving abstract fragment: "... reaction, and solvation of electrolytes. The EFP method described in the previous section has one drawback: the repulsive potential relies on ..."

  11. Theoretical Studies on Cluster Compounds

    NASA Astrophysics Data System (ADS)

    Lin, Zhenyang

    Available from UMI in association with The British Library. Requires signed TDF. The thesis describes some theoretical studies on ligated and bare clusters. Chapter 1 gives a review of the two theoretical models, Tensor Surface Harmonic (TSH) theory and the jellium model, accounting for the electronic structures of ligated and bare clusters. The Polyhedral Skeletal Electron Pair Theory (PSEPT), which correlates the structures and electron counts (total number of valence electrons) of main-group and transition-metal ligated clusters, is briefly described. A structural jellium model is developed in Chapter 2 which accounts for the electronic structures of clusters using a crystal-field perturbation. The zero-order potential we derive is of central-field form, depends on the geometry of the cluster, and has a well-defined relationship to the full nuclear-electron potential. Qualitative arguments suggest that this potential produces different energy-level orderings for clusters with a nucleus of large positive charge at the centre of the cluster. Analysis of the effects of the non-spherical perturbation on the spherical jellium shell structures leads to the conclusion that, for a cluster with a closed-shell electronic structure, a high-symmetry arrangement which is approximately or precisely close packed will be preferred. It also provides a basis for rationalising those structures of clusters with incomplete-shell electronic configurations. In Chapter 3, the geometric conclusions derived in the structural jellium model are developed in more detail. The group-theoretical consequences of Tensor Surface Harmonic theory are developed in Chapter 4 for (ML2)n, (ML4)n and (ML5)n clusters where either the xz and yz or the x^2-y^2 and xy components of L_d^pi and L_d^delta do not contribute equally to the bonding. The closed-shell requirements for such clusters are defined and the orbital symmetry constraints pertaining to the

  12. Theoretical insights into interprofessional education.

    PubMed

    Hean, Sarah; Craddock, Deborah; Hammick, Marilyn

    2012-01-01

    This article argues for the need for theory in the practice of interprofessional education. It highlights the range of theories available to interprofessional educators and promotes the practical application of these to interprofessional learning and teaching. It summarises the AMEE Guides in Medical Education publication entitled Theoretical Insights into Interprofessional Education: AMEE Guide No. 62, where the practical application of three theories, social capital, social constructivism and a sociological perspective of interprofessional education are discussed in-depth through the lens of a case study. The key conclusions of these discussions are presented in this article.

  13. Some thoughts on theoretical physics

    NASA Astrophysics Data System (ADS)

    Tsallis, Constantino

    2004-12-01

    Some thoughts are presented on the inter-relation between beauty and truth in science in general and theoretical physics in particular. Some conjectural procedures that can be used to create new ideas, concepts and results are illustrated in both Boltzmann-Gibbs and nonextensive statistical mechanics. The sociological components of scientific progress and its unavoidable and beneficial controversies are briefly addressed as well, mainly through existing literary texts. Short essay based on the plenary talk given at the International Workshop on Trends and Perspectives in Extensive and Non-Extensive Statistical Mechanics, held November 19-21, 2003, in Angra dos Reis, Brazil.

  14. Nonparametric Item Response Curve Estimation with Correction for Measurement Error

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…

  15. Theoretical perspectives on strange physics

    SciTech Connect

    Ellis, J.

    1983-04-01

    Kaons are heavy enough to have an interesting range of decay modes available to them, and light enough to be produced in sufficient numbers to explore rare modes with satisfying statistics. Kaons and their decays have provided at least two major breakthroughs in our knowledge of fundamental physics. They have revealed to us CP violation, and their lack of flavor-changing neutral interactions warned us to expect charm. In addition, K0-anti-K0 mixing has provided us with one of our most elegant and sensitive laboratories for testing quantum mechanics. There is every reason to expect that future generations of kaon experiments with intense sources would add further to our knowledge of fundamental physics. This talk attempts to set future kaon experiments in a general theoretical context, and to indicate how they may bear upon fundamental theoretical issues. A survey is given of the different experiments which could be done with an Intense Medium Energy Source of Strangeness, including rare K decays, probes of the nature of CP violation, mu decays, hyperon decays, and neutrino physics. (WHK)

  16. Theoretical perspectives on narrative inquiry.

    PubMed

    Emden, C

    1998-04-01

    Narrative inquiry is gaining momentum in the field of nursing. As a research approach it does not have any single heritage of methodology and its practitioners draw upon diverse sources of influence. Central to all narrative inquiry however, is attention to the potential of stories to give meaning to people's lives, and the treatment of data as stories. This is the first of two papers on the topic and addresses the theoretical influences upon a particular narrative inquiry into nursing scholars and scholarship. The second paper, Conducting a narrative analysis, describes the actual narrative analysis as it was conducted in this same study. Together, the papers provide sufficient detail for others wishing to pursue a similar approach to do so, or to develop the ideas and procedures according to their own way of thinking. Within this first theoretical paper, perspectives from Jerome Bruner (1987) and Wade Roof (1993) are outlined. These relate especially to the notion of stories as 'imaginative constructions' and as 'cultural narratives' and as such, highlight the profound importance of stories as being individually and culturally meaningful. As well, perspectives on narrative inquiry from nursing literature are highlighted. Narrative inquiry in this instance lies within the broader context of phenomenology.

  17. Theoretical implications on ISABELLE physics

    SciTech Connect

    Wang, L L.C.

    1980-01-01

    A brief historical review of the development of understanding of the weak interaction and its final unification with electromagnetic theory is given first. Then the production cross sections of W+- and Z0 in hadronic scatterings are estimated; CVC, scaling, the Drell-Yan model, structure functions and perturbative QCD, sigma_W, sigma_Z, and production rates are the aspects considered. Next, the detection of the Z0 and W+- in leptonic and hadronic decay is discussed, along with some detailed features of the Drell-Yan model. Then an estimate of onia production is given, and Higgs boson and technicolor pseudoscalar production are considered. In conclusion, anticipated work in these areas at ISABELLE is summarized. 48 references, 14 figures, 2 tables. (RWR)

  18. Peer support: a theoretical perspective.

    PubMed

    Mead, S; Hilton, D; Curtis, L

    2001-01-01

    This article offers one theoretical perspective of peer support and attempts to define the elements that, when reinforced through education and training, provide a new cultural context for healing and recovery. Persons labeled with psychiatric disability have become victims of social and cultural ostracism and consequently have developed a sense of self that reinforces the "patient" identity. Enabling members of peer support to understand the nature and impact of these cultural forces leads individuals and peer communities toward a capacity for personal, relational, and social change. It is our hope that consumers from all different types of programs (e.g. drop-in, social clubs, advocacy, support, outreach, respite), traditional providers, and policy makers will find this article helpful in stimulating dialogue about the role of peer programs in the development of a recovery based system.

  19. Theoretical studies of molecular collisions

    NASA Technical Reports Server (NTRS)

    Kouri, Donald J.

    1991-01-01

    The following subject areas are covered: (1) total integral reactive cross sections and vibrationally resolved reaction probabilities for F + H2 = HF + H; (2) a theoretical study of inelastic O + N2 collisions; (3) a body-frame close-coupling wave packet approach to gas-phase atom-rigid rotor inelastic collisions; (4) a wave packet study of gas-phase atom-rigid rotor scattering; (5) the application of optical potentials for reactive scattering; (6) a time-dependent, three-dimensional body-frame quantal wave packet treatment of the H + H2 exchange reaction; (7) a time-dependent wave packet approach to atom-diatom reactive collision probabilities; (8) time-dependent wave packets for the complete determination of S-matrix elements for reactive molecular collisions in three dimensions; (9) a comparison of three time-dependent wave packet methods for calculating electron-atom elastic scattering cross sections; and (10) a numerically exact full wave packet approach to molecule-surface scattering.

  20. Theoretical motions of hydrofoil systems

    NASA Technical Reports Server (NTRS)

    Imlay, Frederick H

    1948-01-01

    Results are presented of an investigation that has been undertaken to develop theoretical methods of treating the motions of hydrofoil systems and to determine some of the important parameters. Variations of parameters include three distributions of area between the hydrofoils, two rates of change of downwash angle with angle of attack, three depths of immersion, two dihedral angles, two rates of change of lift with immersion, three longitudinal hydrofoil spacings, two radii of gyration in pitching, and various horizontal and vertical locations of the center of gravity. Graphs are presented to show locations of the center of gravity for stable motion, values of the stability roots, and motions following the sudden application of a vertical force or a pitching moment to the hydrofoil system for numerous sets of values of the parameters.

  1. 'Impulsar': Experimental and Theoretical Investigations

    SciTech Connect

    Apollonov, V. V.

    2008-04-28

    The objective of the 'Impulsar' project is to carry out a cycle of experimental, engineering, and technological work on the creation of a high-efficiency laser rocket engine. The project includes many organizations of the rocket industry and the Academy of Sciences of Russia. A high repetition rate pulse-periodic CO{sub 2} laser system project for launching will be presented. The optical system for 15 MW laser energy delivery and the optical matrix of the laser engine receiver will be discussed as well. Basic characteristics of the laser-based engine will be compared with theoretical predictions and important stages of further technology implementation (low frequency resonance). Relying on a wide cooperation of different branches of science and industry organizations, it is quite possible to use the accumulated potential for launching of nano-vehicles during the upcoming 4-5 years.

  2. Theoretical Models of Generalized Quasispecies.

    PubMed

    Wagner, Nathaniel; Atsmon-Raz, Yoav; Ashkenasy, Gonen

    2016-01-01

    Theoretical modeling of quasispecies has progressed in several directions. In this chapter, we review the works of Emmanuel Tannenbaum, who, together with Eugene Shakhnovich at Harvard University and later with colleagues and students at Ben-Gurion University in Beersheva, implemented one of the more useful approaches, by progressively setting up various formulations for the quasispecies model and solving them analytically. Our review will focus on these papers that have explored new models, assumed the relevant mathematical approximations, and proceeded to analytically solve for the steady-state solutions and run stochastic simulations. When applicable, these models were related to real-life problems and situations, including changing environments, presence of chemical mutagens, evolution of cancer and tumor cells, mutations in Escherichia coli, stem cells, chromosomal instability (CIN), propagation of antibiotic drug resistance, dynamics of bacteria with plasmids, DNA proofreading mechanisms, and more.

  3. Theoretical aspects of WS₂ nanotube chemical unzipping.

    PubMed

    Kvashnin, D G; Antipina, L Yu; Sorokin, P B; Tenne, R; Golberg, D

    2014-07-21

    A theoretical analysis of experimental data on unzipping multilayered WS₂ nanotubes by consequent intercalation of lithium atoms and 1-octanethiol molecules [C. Nethravathi, et al., ACS Nano, 2013, 7, 7311] is presented. The radial expansion of the tube was described using a continuum thin-walled cylinder approximation with parameters evaluated from ab initio calculations. Assuming that the attractive driving force of the 1-octanethiol molecule is its reaction with the intercalated Li ions, ab initio calculations of a 1-octanethiol molecule bonding with Li(+) were carried out. In addition, the non-chemical interactions of the 1-octanethiol dipole with an array of positive point charges representing Li(+) were taken into account. Comparing the energy gain from these interactions with the elastic strain energy of the nanotube allows us to estimate the tube wall deformation after the implantation of 1-octanethiol molecules. An ab initio molecular dynamics simulation confirmed our estimates and demonstrated that a strained WS₂ nanotube, with a sufficient concentration of 1-octanethiol molecules, should indeed be unzipped into a WS₂ nanoribbon.

  4. Estimating preselected and postselected ensembles

    SciTech Connect

    Massar, Serge; Popescu, Sandu

    2011-11-15

    In analogy with the usual quantum state-estimation problem, we introduce the problem of state estimation for a pre- and postselected ensemble. The problem has fundamental physical significance since, as argued by Y. Aharonov and collaborators, pre- and postselected ensembles are the most basic quantum ensembles. Two new features are shown to appear: (1) information is flowing to the measuring device both from the past and from the future; (2) because of the postselection, certain measurement outcomes can be forced never to occur. Due to these features, state estimation in such ensembles is dramatically different from the case of ordinary, preselected-only ensembles. We develop a general theoretical framework for studying this problem and illustrate it through several examples. We also prove general theorems establishing that information flowing from the future is closely related to, and in some cases equivalent to, the complex conjugate information flowing from the past. Finally, we illustrate our approach on examples involving covariant measurements on spin-1/2 particles. We emphasize that all state-estimation problems can be extended to the pre- and postselected situation. The present work thus lays the foundations of a much more general theory of quantum state estimation.

  5. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.
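    The weight-cost relationship described above is commonly modeled as a power law fitted to historical data. The sketch below is a generic illustration of that idea, not the model developed in the paper; the data points and the 800 kg query system are hypothetical.

    ```python
    import math

    def fit_cer(weights, costs):
        """Fit a power-law cost-estimating relationship cost = a * weight**b
        by ordinary least squares in log-log space."""
        xs = [math.log(w) for w in weights]
        ys = [math.log(c) for c in costs]
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        a = math.exp(my - b * mx)
        return a, b

    # Hypothetical historical data: (dry mass in kg, development cost in $M).
    weights = [100, 250, 600, 1200]
    costs = [40, 85, 170, 300]
    a, b = fit_cer(weights, costs)
    predicted = a * 800 ** b  # estimated cost of a hypothetical 800 kg system
    ```

    Fitting in log space keeps the multiplicative error structure typical of cost data; an exponent b below 1 would indicate economies of scale with weight.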

  6. Multivariate Density Estimation and Remote Sensing

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1983-01-01

    Current efforts to develop methods and computer algorithms to effectively represent multivariate data commonly encountered in remote sensing applications are described. While this may involve scatter diagrams, multivariate representations of nonparametric probability density estimates are emphasized. The density function provides a useful graphical tool for looking at data and a useful theoretical tool for classification. This approach is called a thunderstorm data analysis.
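    As an illustration of the nonparametric density estimates the abstract refers to, here is a minimal product-Gaussian kernel density estimator; the single common bandwidth and the toy four-point data set are simplifying assumptions, not the methods of the report.

    ```python
    import math

    def gaussian_kde(points, h):
        """Product Gaussian kernel density estimate in d dimensions.
        points: list of d-tuples; h: one bandwidth shared by all coordinates."""
        n = len(points)
        d = len(points[0])
        norm = n * (h * math.sqrt(2 * math.pi)) ** d
        def density(x):
            total = 0.0
            for p in points:
                sq = sum((xi - pi) ** 2 for xi, pi in zip(x, p))
                total += math.exp(-sq / (2 * h * h))
            return total / norm
        return density

    # Toy 2-D data; the resulting density surface could be plotted on a grid.
    data = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
    f = gaussian_kde(data, h=0.5)
    ```

    Evaluating `f` over a grid gives exactly the kind of multivariate graphical representation the abstract describes; classification follows by comparing class-conditional density estimates.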

  7. Studies in theoretical particle physics

    SciTech Connect

    Kaplan, D.B.

    1991-07-01

    This proposal focuses on research in three distinct areas of particle physics: (1) Nonperturbative QCD. I intend to continue work on analytic modelling of nonperturbative effects in the strong interactions. I have been investigating the theoretical connection between the nonrelativistic quark model and QCD. The primary motivation has been to understand the experimental observation of nonzero matrix elements involving current strange quarks in ordinary matter -- which in the quark model has no strange quark component. This has led to my present work on understanding constituent (quark model) quarks as collective excitations of QCD degrees of freedom. (2) Weak Scale Baryogenesis. A continuation of work on baryogenesis in the early universe from weak interactions. In particular, an investigation of baryogenesis occurring during the weak phase transition through anomalous baryon violating processes in the standard model of weak interactions. (3) Flavor and Compositeness. Further investigation of a new mechanism that I recently discovered for dynamical mass generation for fermions, which naturally leads to a family hierarchy structure. A discussion of recent past work is found in the next section, followed by an outline of the proposed research. A recent publication from each of these three areas is attached to this proposal.

  8. Research in Theoretical Particle Physics

    SciTech Connect

    Feldman, Hume A; Marfatia, Danny

    2014-09-24

    This document is the final report on activity supported under DOE Grant Number DE-FG02-13ER42024. The report covers the period July 15, 2013 – March 31, 2014. Faculty supported by the grant during the period were Danny Marfatia (1.0 FTE) and Hume Feldman (1% FTE). The grant partly supported University of Hawaii students, David Yaylali and Keita Fukushima, who are supervised by Jason Kumar. Both students are expected to graduate with Ph.D. degrees in 2014. Yaylali will be joining the University of Arizona theory group in Fall 2014 with a 3-year postdoctoral appointment under Keith Dienes. The group’s research covered topics subsumed under the Energy Frontier, the Intensity Frontier, and the Cosmic Frontier. Many theoretical results related to the Standard Model and models of new physics were published during the reporting period. The report contains brief project descriptions in Section 1. Sections 2 and 3 list published and submitted work, respectively. Sections 4 and 5 summarize group activity including conferences, workshops and professional presentations.

  9. Theoretical investigations of plasma processes

    NASA Technical Reports Server (NTRS)

    Wilhelm, H. E.; Hong, S. H.

    1976-01-01

    System analyses are presented for electrically sustained, collision dominated plasma centrifuges, in which the plasma rotates under the influence of the Lorentz forces resulting from the interaction of the current density fields with an external magnetic field. It is shown that gas discharge centrifuges are technically feasible in which the plasma rotates at speeds up to 1 million cm/sec. The associated centrifugal forces produce a significant spatial isotope separation, which is somewhat perturbed in the viscous boundary layers at the centrifuge walls. The isotope separation effect becomes more pronounced at higher rotation speeds. The induced magnetic fields have negligible influence on the plasma rotation if the Hall coefficient is small. In the technical realization of collision dominated plasma centrifuges, a trade-off has to be made between power density and speeds of rotation. The diffusion of sputtered atoms to system surfaces of ion propulsion systems and the deposition of the atoms are treated theoretically by means of a simple model which permits an analytical solution. The problem leads to an inhomogeneous integral equation.

  10. Rethinking Theoretical Approaches to Stigma

    PubMed Central

    Martin, Jack K; Lang, Annie; Olafsdottir, Sigrun

    2008-01-01

    A resurgence of research and policy efforts on stigma both facilitates and forces a reconsideration of the levels and types of factors that shape reactions to persons with conditions that engender prejudice and discrimination. Focusing on the case of mental illness but drawing from theories and studies of stigma across the social sciences, we propose a framework that brings together theoretical insights from micro, meso and macro level research: Framework Integrating Normative Influences on Stigma (FINIS) starts with Goffman’s notion that understanding stigma requires a language of social relationships, but acknowledges that individuals do not come to social interaction devoid of affect and motivation. Further, all social interactions take place in a context in which organizations, media and larger cultures structure normative expectations which create the possibility of marking “difference”. Labelling theory, social network theory, the limited capacity model of media influence, the social psychology of prejudice and discrimination, and theories of the welfare state all contribute to an understanding of the complex web of expectations shaping stigma. FINIS offers the potential to build a broad-based scientific foundation based on understanding the effects of stigma on the lives of persons with mental illness, the resources devoted to the organizations and families who care for them, and policies and programs designed to combat stigma. We end by discussing the clear implications this framework holds for stigma reduction, even in the face of conflicting results. PMID:18436358

  11. Theoretical Transport Model for Tokamaks

    NASA Astrophysics Data System (ADS)

    Ghanem, Elsayed Mohammad

    In the present thesis work a theoretical transport model is suggested to study the anomalous transport of plasma particles and energy across the axisymmetric equilibrium toroidal magnetic flux surfaces in tokamaks. The model combines two transport mechanisms: drift waves, which dominate the transport in the core region, and resistive ballooning modes, which dominate the transport in the edge region. The resulting unified model has been used in a predictive transport code to simulate the plasma transport in different tokamak experiments operating in both the ohmic heating phase and the low confinement mode (L-mode). For ohmic plasmas, the model was used to study the saturation of energy confinement time at high plasma density. The effect of the resistive ballooning mode as a possible cause of the saturation phenomenon has been investigated together with the effect of the ion temperature gradient mode. For low confinement mode plasmas, the study emphasized using the model to obtain a scaling law for the energy confinement time in terms of the various plasma parameters and comparing it with the scaling laws derived by fitting experimental data.

  12. Variance estimation for stratified propensity score estimators.

    PubMed

    Williamson, E J; Morley, R; Lucas, A; Carpenter, J R

    2012-07-10

    Propensity score methods are increasingly used to estimate the effect of a treatment or exposure on an outcome in non-randomised studies. We focus on one such method, stratification on the propensity score, comparing it with the method of inverse-probability weighting by the propensity score. The propensity score--the conditional probability of receiving the treatment given observed covariates--is usually an unknown probability estimated from the data. Estimators for the variance of treatment effect estimates typically used in practice, however, do not take into account that the propensity score itself has been estimated from the data. By deriving the asymptotic marginal variance of the stratified estimate of treatment effect, correctly taking into account the estimation of the propensity score, we show that routinely used variance estimators are likely to produce confidence intervals that are too conservative when the propensity score model includes variables that predict (cause) the outcome, but only weakly predict the treatment. In contrast, a comparison with the analogous marginal variance for the inverse probability weighted (IPW) estimator shows that routinely used variance estimators for the IPW estimator are likely to produce confidence intervals that are almost always too conservative. Because exact calculation of the asymptotic marginal variance is likely to be complex, particularly for the stratified estimator, we suggest that bootstrap estimates of variance should be used in practice.
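    The bootstrap recommendation above can be illustrated with a toy version of the stratified estimator. Everything below is a deliberate simplification: the propensity score is proxied by ranking on a single covariate rather than fitted by a regression model, and the data are synthetic with a true treatment effect of 2.

    ```python
    import random
    import statistics

    def stratified_effect(data, n_strata=5):
        """Average within-stratum treated-vs-control outcome differences,
        with strata formed by ranking on the (proxy) propensity score."""
        data = sorted(data, key=lambda r: r[0])  # r = (covariate, treated, outcome)
        size = len(data) // n_strata
        diffs = []
        for s in range(n_strata):
            stratum = data[s * size:] if s == n_strata - 1 else data[s * size:(s + 1) * size]
            treated = [y for _, tr, y in stratum if tr]
            control = [y for _, tr, y in stratum if not tr]
            if treated and control:
                diffs.append((statistics.mean(treated) - statistics.mean(control),
                              len(stratum)))
        total = sum(w for _, w in diffs)
        return sum(d * w for d, w in diffs) / total

    def bootstrap_se(data, estimator, n_boot=200, seed=0):
        """Bootstrap standard error: resampling rows repeats the whole
        pipeline, including stratum formation, inside every replicate."""
        rng = random.Random(seed)
        reps = [estimator([rng.choice(data) for _ in data]) for _ in range(n_boot)]
        return statistics.stdev(reps)

    # Synthetic data: outcome = covariate + 2 * treated, alternating treatment.
    data = [(i / 200, i % 2 == 1, i / 200 + 2 * (i % 2)) for i in range(200)]
    effect = stratified_effect(data)
    se = bootstrap_se(data, stratified_effect)
    ```

    Because each bootstrap replicate re-forms the strata from scratch, the resulting standard error reflects the estimation of the score itself, which is exactly what the routinely used closed-form variance estimators miss.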

  13. Theoretical aspects of calcium signaling

    NASA Astrophysics Data System (ADS)

    Pencea, Corneliu Stefan

    2001-08-01

    Experiments investigating intracellular calcium dynamics have revealed that calcium signals differentially affect a variety of intracellular processes, from fertilization and cell development and differentiation to subsequent cellular activity, ending with cell death. As an intracellular messenger, calcium transmits information within and between cells, thus regulating their activity. To control such a variety of processes, calcium signals have to be very flexible and also precisely regulated. The cell uses a calcium signaling "toolkit", where calcium ions can act in different contexts of space, amplitude and time. For different tasks, the cell selects the particular signal, or combination of signals, that triggers the appropriate physiological response. The physical foundations of such a versatile cellular signaling toolkit involving calcium are not completely understood, despite important experimental and theoretical progress made recently. The declared goal of this work is to investigate physical mechanisms on which the propagation of differential signals can be based. The dynamics of calcium near a cluster of inositol trisphosphate (IP3) activated calcium channels has been investigated analytically and numerically. Our work has demonstrated that clusters of different IP3 receptors can show similar bistable behavior, but differ in both the transient and long term dynamics. We have also investigated the conditions under which a calcium signal propagates between a pair of localized stores. We have shown that the propagation of the signal across a random distribution of such stores shows a percolation transition manifested in the shape of the wave front. More importantly, our work indicates that specific distributions of stores can be interpreted as calcium circuits that can perform important signal-analyzing tasks, from unidirectional propagation and coincidence detection to a complete set of logic gates. We believe that phenomena like the ones described are

  14. The Future of Theoretical Cosmology

    NASA Astrophysics Data System (ADS)

    Carroll, Sean

    2006-04-01

    Over the course of the twentieth century, we went from knowing essentially nothing about the large-scale structure of the universe to knowing quite a bit: that it is expanding from a Big Bang, that it is approximately 14 billion years old, that there are perhaps 100 billion galaxies spread uniformly throughout the observable universe. Theory has progressed along with observation: general relativity now forms the basis for all our discussions about cosmology, and advances in quantum field theory and particle physics have allowed us to talk sensibly about nucleosynthesis, dark matter, and primordial inflation. In the twenty-first century, two obvious candidates stand out: the nature of the dark sector, and the beginning of time. With 95% of the energy density of the universe apparently residing in dark matter and dark energy, the issues to be addressed by theorists span a wide range: What are these substances? Do they interact, with each other or with ordinary matter? Can they be detected in the lab? Why do they have the abundances we observe? Do they really exist, or are we being fooled by the behavior of gravity on large scales? Meanwhile, we will continue to stretch our theoretical models further into the past. Did the dark matter decouple from thermal equilibrium at early times? Do phase transitions in the early universe produce observable gravitational-wave backgrounds? Did inflation occur, and if so what were the dynamics of the inflaton field? Why did inflation start? Are there distinct domains within the universe, possibly with different properties? Can quantum gravity resolve the initial singularity, and connect us with a pre-Big-Bang phase? Why is the early universe different from the late universe -- what is the origin of time asymmetry? It's impossible to predict what the answers to any of these issues will turn out to be, but we can be confident that we won't be running out of interesting questions.

  15. Estimating potential evapotranspiration with improved radiation estimation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Potential evapotranspiration (PET) is of great importance to estimation of surface energy budget and water balance calculation. The accurate estimation of PET will facilitate efficient irrigation scheduling, drainage design, and other agricultural and meteorological applications. However, accuracy o...

  16. Ensemble estimators for multivariate entropy estimation.

    PubMed

    Sricharan, Kumar; Wei, Dennis; Hero, Alfred O

    2013-07-01

    The problem of estimation of density functionals like entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffer from the curse of dimensionality, wherein the mean squared error (MSE) decays increasingly slowly as a function of the sample size T as the dimension d of the samples increases. In particular, the rate is often glacially slow, of order O(T^(-γ/d)), where γ > 0 is a rate parameter. Examples of such estimators include kernel density estimators, k-nearest neighbor (k-NN) density estimators, k-NN entropy estimators, intrinsic dimension estimators and other examples. In this paper, we propose a weighted affine combination of an ensemble of such estimators, where optimal weights can be chosen such that the weighted estimator converges at a much faster, dimension-invariant rate of O(T^(-1)). Furthermore, we show that these optimal weights can be determined by solving a convex optimization problem which can be performed offline and does not require training data. We illustrate the superior performance of our weighted estimator for two important applications: (i) estimating the Panter-Dite distortion-rate factor and (ii) estimating the Shannon entropy for testing the probability distribution of a random sample.
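    The ensemble idea can be illustrated in one dimension with Vasicek's m-spacing entropy estimator, a different base estimator than the k-NN ones in the paper, combined here with uniform weights rather than the convex-optimization weights the authors derive.

    ```python
    import math

    def vasicek_entropy(sample, m):
        """Vasicek m-spacing estimator of differential entropy (nats)
        for a 1-D sample: average of log(n * spacing / (2m))."""
        xs = sorted(sample)
        n = len(xs)
        total = 0.0
        for i in range(n):
            lo = xs[max(i - m, 0)]
            hi = xs[min(i + m, n - 1)]
            total += math.log(n * (hi - lo) / (2 * m))
        return total / n

    def ensemble_entropy(sample, ms, weights=None):
        """Affine combination of base estimators at several spacing orders m.
        Uniform weights here; the paper chooses them by a convex program."""
        ests = [vasicek_entropy(sample, m) for m in ms]
        if weights is None:
            weights = [1.0 / len(ms)] * len(ms)
        return sum(w * e for w, e in zip(weights, ests))

    # Uniform(0, 1) sample on a grid; the true differential entropy is 0.
    sample = [i / 1000 for i in range(1001)]
    estimate = ensemble_entropy(sample, ms=[5, 10, 20])
    ```

    Each choice of m trades bias against variance; averaging estimators built at several scales is the simplest instance of the weighted-ensemble construction the abstract describes.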

  17. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
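    A minimal PSO sketch makes the mechanics concrete. The objective below is a toy 2-D quadratic standing in for a (negative log-)likelihood surface; the swarm size, inertia, and acceleration constants are common textbook defaults, not the settings used in the paper.

    ```python
    import random

    def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
        """Minimal particle swarm minimizer: each particle tracks its personal
        best, the swarm tracks a global best, and velocities blend inertia
        with cognitive (personal) and social (global) pulls."""
        rng = random.Random(seed)
        d = len(bounds)
        pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
        vel = [[0.0] * d for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_f = [f(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_f[i])
        gbest, gbest_f = pbest[g][:], pbest_f[g]
        for _ in range(iters):
            for i in range(n_particles):
                for k in range(d):
                    r1, r2 = rng.random(), rng.random()
                    vel[i][k] = (w * vel[i][k]
                                 + c1 * r1 * (pbest[i][k] - pos[i][k])
                                 + c2 * r2 * (gbest[k] - pos[i][k]))
                    pos[i][k] += vel[i][k]
                fi = f(pos[i])
                if fi < pbest_f[i]:
                    pbest[i], pbest_f[i] = pos[i][:], fi
                    if fi < gbest_f:
                        gbest, gbest_f = pos[i][:], fi
        return gbest, gbest_f

    # Toy "likelihood surface": a 2-D quadratic with its optimum at (1, -2).
    best, best_f = pso(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                       [(-5, 5), (-5, 5)])
    ```

    Unlike MCMC, the swarm seeks only the maximum-likelihood point rather than sampling the posterior, which is precisely why it can cope with rugged, multi-modal surfaces.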

  18. Theoretical study of transition-metal ions bound to benzene

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Partridge, Harry; Langhoff, Stephen R.

    1992-01-01

    Theoretical binding energies are reported for all first-row and selected second-row transition metal ions (M+) bound to benzene. The calculations employ basis sets of at least double-zeta plus polarization quality and account for electron correlation using the modified coupled-pair functional method. While the bonding is predominantly electrostatic, the binding energies are significantly increased by electron correlation, because the donation from the metal d orbitals to the benzene pi* orbitals is not well described at the self-consistent-field level. The uncertainties in the computed binding energies are estimated to be about 5 kcal/mol. Although the calculated and experimental binding energies generally agree to within their combined uncertainties, it is likely that the true binding energies lie in the lower portion of the experimental range. This is supported by the very good agreement between the theoretical and recent experimental binding energies for AgC6H6(+).

  19. Information-theoretical noninvasive damage detection in bridge structures.

    PubMed

    Sudu Ambegedara, Amila; Sun, Jie; Janoyan, Kerop; Bollt, Erik

    2016-11-01

    Damage detection of mechanical structures such as bridges is an important research problem in civil engineering. Using spatially distributed sensor time series data collected from a recent experiment on a local bridge in upstate New York, we study noninvasive damage detection using information-theoretical methods. Several findings are in order. First, the time series data, which represent accelerations measured at the sensors, more closely follow Laplace distribution than normal distribution, allowing us to develop parameter estimators for various information-theoretic measures such as entropy and mutual information. Second, as damage is introduced by the removal of bolts of the first diaphragm connection, the interaction between spatially nearby sensors as measured by mutual information becomes weaker, suggesting that the bridge is "loosened." Finally, using a proposed optimal mutual information interaction procedure to prune away indirect interactions, we found that the primary direction of interaction or influence aligns with the traffic direction on the bridge even after damaging the bridge.
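    The two ingredients the abstract mentions can be sketched as follows: a closed-form entropy estimate under a Laplace fit, and a mutual-information estimate between two sensor series. The histogram MI estimator below is a plain stand-in, simpler than the authors' Laplace-based estimators, and the bin count is an arbitrary choice.

    ```python
    import math

    def laplace_entropy(xs):
        """Differential entropy of a Laplace fit, H = 1 + ln(2b), where the
        scale b is estimated by the mean absolute deviation from the median."""
        xs_sorted = sorted(xs)
        median = xs_sorted[len(xs_sorted) // 2]
        b = sum(abs(x - median) for x in xs) / len(xs)
        return 1.0 + math.log(2.0 * b)

    def binned_mi(xs, ys, bins=8):
        """Plain histogram estimate of mutual information (in nats)
        between two equally long series."""
        def bin_of(v, lo, hi):
            return min(int((v - lo) / (hi - lo) * bins), bins - 1)
        lox, hix, loy, hiy = min(xs), max(xs), min(ys), max(ys)
        n = len(xs)
        joint, px, py = {}, [0] * bins, [0] * bins
        for x, y in zip(xs, ys):
            i, j = bin_of(x, lox, hix), bin_of(y, loy, hiy)
            joint[(i, j)] = joint.get((i, j), 0) + 1
            px[i] += 1
            py[j] += 1
        return sum((c / n) * math.log(c * n / (px[i] * py[j]))
                   for (i, j), c in joint.items())
    ```

    Tracking how pairwise `binned_mi` values drop between nearby sensors mirrors, in miniature, the "loosening" signature the authors observe after bolt removal.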

  20. Theoretical Analysis of Positional Uncertainty in Direct Georeferencing

    NASA Astrophysics Data System (ADS)

    Coskun Kiraci, Ali; Toz, Gonul

    2016-10-01

    A GNSS/INS system, composed of a Global Navigation Satellite System and an Inertial Navigation System together, can provide orientation parameters directly from the observations collected during the flight. Thus orientation parameters can be obtained by the GNSS/INS integration process without any need for aerotriangulation after the flight. In general, positional uncertainty can be estimated from the known coordinates of Ground Control Points (GCPs), which require field work such as marker construction and GNSS measurement, adding cost to the project. Here the question arises: what should the theoretical uncertainty of point coordinates be, given the uncertainties of the orientation parameters? In this study the contribution of each orientation parameter to positional uncertainty is examined, and the theoretical positional uncertainty is computed without GCP measurement for direct georeferencing using a graphical user interface developed in MATLAB.

  1. Information-theoretical noninvasive damage detection in bridge structures

    NASA Astrophysics Data System (ADS)

    Sudu Ambegedara, Amila; Sun, Jie; Janoyan, Kerop; Bollt, Erik

    2016-11-01

    Damage detection of mechanical structures such as bridges is an important research problem in civil engineering. Using spatially distributed sensor time series data collected from a recent experiment on a local bridge in upstate New York, we study noninvasive damage detection using information-theoretical methods. Several findings are in order. First, the time series data, which represent accelerations measured at the sensors, more closely follow Laplace distribution than normal distribution, allowing us to develop parameter estimators for various information-theoretic measures such as entropy and mutual information. Second, as damage is introduced by the removal of bolts of the first diaphragm connection, the interaction between spatially nearby sensors as measured by mutual information becomes weaker, suggesting that the bridge is "loosened." Finally, using a proposed optimal mutual information interaction procedure to prune away indirect interactions, we found that the primary direction of interaction or influence aligns with the traffic direction on the bridge even after damaging the bridge.

  2. Earthquake probabilities: theoretical assessments and reality

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2013-12-01

    It is common knowledge that earthquakes are complex phenomena whose classification and sizing remain serious problems for contemporary seismology. In general, their frequency-magnitude distributions exhibit power law scaling. This scaling differs significantly when different time and/or space domains are considered. At the scale of a particular earthquake rupture zone, the frequency of similar size events is usually estimated to be about once in several hundred years. Evidently, contemporary seismology does not possess enough reported instrumental data for any reliable quantification of earthquake probability at a given place of expected event. Regrettably, most of the state-of-the-art theoretical approaches to assessing the probability of seismic events are based on trivial (e.g. Poisson, periodic, etc.) or, conversely, delicately designed (e.g. STEP, ETAS, etc.) models of earthquake sequences. Some of these models are evidently erroneous, some can be rejected by the existing statistics, and some are hardly testable in our lifetime. Nevertheless, such probabilistic counts, including seismic hazard assessment and earthquake forecasting, when used in practice eventually lead to scientifically groundless advice being communicated to decision makers and to inappropriate decisions. As a result, the population of seismic regions continues facing unexpected risk and losses. The international project Global Earthquake Model (GEM) is on the wrong track if it continues to base seismic risk estimates on the standard, mainly probabilistic, methodology of seismic hazard assessment. It is generally accepted that earthquakes are infrequent, low-probability events. However, they keep occurring at earthquake-prone areas with 100% certainty. 
Given the expectation of a seismic event once per hundred years, the daily probability of occurrence on a certain date may range from 0 to 100% depending on the choice of probability space (which is yet unknown and, therefore, made by a subjective lucky chance
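    Under the simplest Poisson assumption, the kind of "trivial" model the abstract criticizes, the once-per-hundred-years expectation translates into daily and yearly probabilities as follows; this is only an illustration of one choice of probability space, not an endorsement of it.

    ```python
    import math

    # Poisson model: events arrive at a constant rate of 1 per 100 years.
    rate_per_year = 1.0 / 100.0
    rate_per_day = rate_per_year / 365.25

    # Probability of at least one event on a given day, and within a given year.
    p_day = 1.0 - math.exp(-rate_per_day)   # about 2.7e-5
    p_year = 1.0 - math.exp(-rate_per_year)  # about 1%
    ```

    The abstract's point is that nothing forces this constant-rate choice: other admissible probability spaces yield wildly different daily numbers for the same hundred-year expectation.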

  3. Cost-estimating relationships for space programs

    NASA Technical Reports Server (NTRS)

    Mandell, Humboldt C., Jr.

    1992-01-01

    Cost-estimating relationships (CERs) are defined and discussed as they relate to the estimation of theoretical costs for space programs. The paper primarily addresses CERs based on analogous relationships between physical and performance parameters to estimate future costs. Analytical estimation principles are reviewed, examining the sources of errors in cost models, and the use of CERs is shown to be affected by organizational culture. Two paradigms for cost estimation are set forth: (1) the Rand paradigm for single-culture single-system methods; and (2) the Price paradigms that incorporate a set of cultural variables. For space programs that are potentially subject to even small cultural changes, the Price paradigms are argued to be more effective. The derivation and use of accurate CERs is important for developing effective cost models to analyze the potential of a given space program.

  4. Comparison of experimental with theoretical total-pressure loss in parallel-walled turbojet combustors

    NASA Technical Reports Server (NTRS)

    Dittrich, Ralph T

    1957-01-01

    An experimental investigation of combustor total-pressure loss was undertaken to confirm previous theoretical analyses of effects of geometric and flow variables and of heat addition. The results indicate that a reasonable estimate of cold-flow total-pressure-loss coefficient may be obtained from the theoretical analyses. Calculated total-pressure loss due to heat addition agreed with experimental data only when there was no flame ejection from the liner at the upstream air-entry holes.

  5. Theoretical spectra of floppy molecules

    NASA Astrophysics Data System (ADS)

    Chen, Hua

    2000-09-01

    Detailed studies of the vibrational dynamics of floppy molecules are presented. Six-D bound-state calculations of the vibrations of the rigid water dimer based on several anisotropic site potentials (ASP) are presented. A new sequential diagonalization truncation (SDT) approach was used to diagonalize the angular part of the Hamiltonian. A symmetrized angular basis and a potential-optimized discrete variable representation (DVR) for the intermonomer distance coordinate were used in the calculations. The converged results differ significantly from the results presented by Leforestier et al. [J. Chem. Phys. 106, 8527 (1997)]. It was demonstrated that the ASP-S potential yields more accurate tunneling splittings than the other ASP potentials used. Fully coupled 4D quantum mechanical calculations were performed for the carbon dioxide dimer using the potential energy surface given by Bukowski et al. [J. Chem. Phys., 110, 3785 (1999)]. The intermolecular vibrational frequencies and symmetry adapted force constants were estimated and compared with experiments. The inter-conversion tunneling dynamics was studied using the calculated virtual tunneling splittings. Symmetrized Radau coordinates and the sequential diagonalization truncation approach were formulated for acetylene. A 6D calculation was performed with 5 DVR points for each stretch coordinate, and an angular basis that is capable of converging the angular part of the Hamiltonian to 30 cm-1 for internal energies up to 14000 cm-1. The probability at the vinylidene configuration was evaluated. It was found that the eigenstates begin to extend to the vinylidene configuration from about 10000 cm-1, and the r_a coordinate is closely related to the vibrational dynamics at high energy. Finally, a direct product DVR was defined for coupled angular momentum operators, and the SDT approach was formulated for it. These were applied in solving the angular part of the Hamiltonian for the carbon dioxide dimer problem. 
The results show the method is capable of giving very accurate

  6. Relativistic Navigation: A Theoretical Foundation

    NASA Technical Reports Server (NTRS)

    Turyshev, Slava G.

    1996-01-01

    We present a theoretical foundation for relativistic astronomical measurements in curved space-time. In particular, we discuss a new iterative approach for describing the dynamics of an isolated astronomical N-body system in metric theories of gravity. To do this, we generalize the Fock-Chandrasekhar method of the weak-field and slow-motion approximation (WFSMA) and develop a theory of relativistic reference frames (RF's) for a gravitationally bound many-extended-body problem. In any proper RF constructed in the immediate vicinity of an arbitrary body, the N-body solutions of the gravitational field equations are formally presented as a sum of the Riemann-flat inertial space-time, the gravitational field generated by the body itself, the unperturbed solutions for each body in the system transformed to the coordinates of this proper RF, and the gravitational interaction term. We develop the basic concept of a general WFSMA theory of the celestial RF's applicable to a wide class of metric theories of gravity and an arbitrary model of matter distribution. We apply the proposed method to general relativity. Celestial bodies are described using a perfect fluid model; as such, they possess any number of internal mass and current multipole moments that explicitly characterize their internal structures. The obtained relativistic corrections to the geodesic equations of motion arise because of a coupling of the bodies' multipole moments to the surrounding gravitational field. The resulting relativistic transformations between the different RF's extend the Poincare group to the motion of deformable self-gravitating bodies. Within the present accuracy of astronomical measurements we discuss the properties of the Fermi-normal-like proper RF that is defined in the immediate vicinity of the extended compact bodies. We further generalize the proposed approximation method and include two Eddington parameters (gamma, beta). This generalized approach was used to derive the

  7. Computing theoretical rates of part C eligibility based on developmental delays.

    PubMed

    Rosenberg, Steven A; Ellison, Misoo C; Fast, Bruce; Robinson, Cordelia C; Lazar, Radu

    2013-02-01

    Part C early intervention is a nationwide program that serves infants and toddlers who have developmental delays. This article presents a methodology for computing a theoretical estimate of the proportion of children who are likely to be eligible for Part C services based on delays in any of the 5 developmental domains (cognitive, motor, communication, social-emotional and adaptive) that are assessed to determine eligibility. Rates of developmental delays were estimated from a multivariate normal cumulative distribution function. This approach calculates theoretical rates of occurrence for conditions that are defined in terms of standard deviations from the mean on several variables that are approximately normally distributed. Evidence is presented to suggest that the procedures described produce accurate estimates of rates of child developmental delays. The methodology used in this study provides a useful tool for computing theoretical rates of occurrence of developmental delays that make children candidates for early intervention.
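The core computation described above can be sketched with SciPy's multivariate normal CDF. A minimal sketch, assuming values not taken from the article: five domains with a common inter-domain correlation of 0.5 and "delay" defined as a score at or below -1.5 SD in any domain.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Illustrative assumptions (not from the article): 5 developmental domains,
# scores jointly standard normal with a common inter-domain correlation,
# and "delay" defined as a score at or below -1.5 SD in any domain.
n_domains = 5
cutoff = -1.5
rho = 0.5

cov = np.full((n_domains, n_domains), rho)
np.fill_diagonal(cov, 1.0)

# P(no delay) = P(all domain scores > cutoff); by symmetry of the centered
# multivariate normal this equals P(all scores < -cutoff).
mvn = multivariate_normal(mean=np.zeros(n_domains), cov=cov)
p_no_delay = mvn.cdf(np.full(n_domains, -cutoff))
p_eligible = 1.0 - p_no_delay

single = norm.cdf(cutoff)  # marginal delay rate in a single domain
print(f"single-domain rate: {single:.4f}, theoretical eligibility rate: {p_eligible:.4f}")
```

Because the domain scores are positively correlated, the theoretical eligibility rate falls between the single-domain rate and the (independence) sum of the five marginal rates.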

  8. Price and cost estimation

    NASA Technical Reports Server (NTRS)

    Stewart, R. D.

    1979-01-01

    Price and Cost Estimating Program (PACE II) was developed to prepare man-hour and material cost estimates. Versatile and flexible tool significantly reduces computation time and errors and reduces typing and reproduction time involved in preparation of cost estimates.

  9. Beauty baryon decays: a theoretical overview

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Ming

    2014-11-01

    I overview the theoretical status and recent progress on the calculations of beauty baryon decays, focusing on the QCD aspects of the exclusive semileptonic Λb → p l ν decay at large recoil and the theoretical challenges of the radiative and electroweak penguin decays Λb → Λγ, Λl+l-.

  10. Theoretical Orientation of British Infant School Teachers.

    ERIC Educational Resources Information Center

    Miller, Janet A.

    A study examined the theoretical orientation of infant school and infant department teachers in England. The Theoretical Orientation to Reading Profile (TORP) was used to determine the teacher's orientation to reading instruction. TORP applies a Likert scale response system to a series of statements about how reading should be taught. Subjects…

  11. Electron-Transfer Reactions of Organometallic and Coordination Compounds in the Absence of Solvent: Experimental Results and Theoretical Approaches

    DTIC Science & Technology

    1988-07-07

    … is examined, and theoretically estimated factors are compared to experiment for a typical metallocene, ferrocene.

  12. Estimating tail probabilities

    SciTech Connect

    Carr, D.B.; Tolley, H.D.

    1982-12-01

    This paper investigates procedures for univariate nonparametric estimation of tail probabilities. Extrapolated values for tail probabilities beyond the data are also obtained based on the shape of the density in the tail. Several estimators which use exponential weighting are described. These are compared in a Monte Carlo study to nonweighted estimators, to the empirical cdf, to an integrated kernel, to a Fourier series estimate, to a penalized likelihood estimate and a maximum likelihood estimate. Selected weighted estimators are shown to compare favorably to many of these standard estimators for the sampling distributions investigated.

  13. Theoretical studies in polynucleotide biophysics

    NASA Astrophysics Data System (ADS)

    Lubensky, David Koslan

    This thesis investigates the physics of the polynucleotides DNA and RNA, with an emphasis on theory relevant to single molecule experiments. An introductory chapter reviews some facts about these polymers and gives an overview of important experimental techniques. Motivated by attempts to develop new technologies for DNA sequencing and related assays, we turn in the second chapter to the dynamics of polynucleotides threaded through narrow pores. We show that there is a range of polymer lengths in which the system is approximately translationally invariant, and we develop a coarse-grained description of this regime. We also introduce a more microscopic model that provides a physically reasonable scenario in which, as in experiments, the polymer's speed depends sensitively on its chemical composition. Finally, we point out that the experimental distribution of polymer transit times is much broader than expected from simple estimates, and speculate on why this might be. The third chapter gives a brief account, focusing on behavior averaged over many random sequences, of work on the mechanical pulling apart of the two strands of double-stranded DNA (dsDNA). When the pulling force is increased to a critical value (typically of order 10 pN), an "unzipping" transition occurs. For random DNA sequences with short-ranged correlations, we obtain exact results for the number of monomers liberated, including the critical behavior at the transition. The final chapter expands upon these results on the unzipping transition, providing more details of our disorder-averaged calculations and tackling the more experimentally accessible problem of the unzipping of a single dsDNA molecule. As the applied force approaches the critical value, a given dsDNA unravels in a series of discrete, sequence dependent steps that allow it to reach successively deeper energy minima. Plots of extension versus force thus take the striking form of a series of plateaus separated by sharp jumps. Similar

  14. THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)

    EPA Science Inventory

    This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...

  15. Association of older women's limb circumferences and muscle mass as estimated with bioelectrical impedance.

    PubMed

    Bohannon, Richard W; Chu, Johnson; Steffl, Michal

    2016-03-01

    [Purpose] The purpose of this study was to describe the relationship between three practical measures used to characterize muscle mass: mid-arm circumference, maximum calf circumference, and muscle mass index determined using bioimpedance analysis. [Subjects and Methods] Thirty-eight ambulatory women residing in a senior center (mean age, 83 years) participated in this cross-sectional study. Their mid-arm circumference and maximum calf circumference were measured bilaterally and they all underwent bioimpedance analysis. Relationships were examined by using Pearson (r) correlations, Cronbach's alpha, and factor analysis. [Results] Circumferential measures correlated significantly with one another (r = 0.745-0.968) and with the muscle mass index determined with bioimpedance analysis (r = 0.480-0.628). The Cronbach's alpha for the measures was 0.905. Factor analysis confirmed that all of the measures were reflective of a common construct. [Conclusion] On the basis of their correlations with one another and the muscle mass index determined with bioimpedance analysis, circumferential measures of the mid-arm or calf may be considered crude indicators of reduced muscle mass.
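The internal-consistency analysis reported above (pairwise Pearson correlations plus Cronbach's alpha) is straightforward to reproduce. A minimal sketch with simulated stand-in data, not the study's measurements: one latent "muscle mass" factor drives three noisy measures.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Simulated stand-in data: a latent factor plus measure-specific noise.
latent = rng.normal(size=200)
measures = np.column_stack([
    latent + rng.normal(scale=0.4, size=200),   # mid-arm circumference (z-scored)
    latent + rng.normal(scale=0.4, size=200),   # calf circumference (z-scored)
    latent + rng.normal(scale=0.8, size=200),   # BIA muscle mass index (z-scored)
])

r = np.corrcoef(measures, rowvar=False)        # pairwise Pearson correlations
alpha = cronbach_alpha(measures)
print(f"r(arm, calf) = {r[0, 1]:.3f}, alpha = {alpha:.3f}")
```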

  16. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

    SciTech Connect

    Paiz, Mary Rose

    2015-04-01

    The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
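The lower-threshold idea can be sketched with SciPy: because GEV theory applies to block maxima, one fits the GEV distribution to the negated daily minima and negates a high quantile of the fit. This is a stationary sketch with simulated response times; the report's model is additionally non-stationary, which this omits.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
# Simulated stand-in data: 120 days of transactional response times (ms),
# 500 transactions per day with roughly log-normal latency.
daily_min = np.array([
    rng.lognormal(mean=4.0, sigma=0.3, size=500).min() for _ in range(120)
])

# GEV theory applies to block maxima, so fit to the negated daily minima.
shape, loc, scale = genextreme.fit(-daily_min)

# Lower threshold: negate a high quantile of the fitted GEV.
lower_threshold = -genextreme.ppf(0.99, shape, loc=loc, scale=scale)
print(f"estimated lower threshold: {lower_threshold:.1f} ms "
      f"(observed daily minima range {daily_min.min():.1f}-{daily_min.max():.1f})")
```

Response times faster than this threshold would then be flagged as likely unsuccessful (truncated) transactions.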

  17. Bias reduction in the estimation of mutual information.

    PubMed

    Zhu, Jie; Bellanger, Jean-Jacques; Shu, Huazhong; Yang, Chunfeng; Le Bouquin Jeannès, Régine

    2014-11-01

    This paper deals with the control of bias estimation when estimating mutual information from a nonparametric approach. We focus on continuously distributed random data and the estimators we developed are based on a nonparametric k-nearest-neighbor approach for arbitrary metrics. Using a multidimensional Taylor series expansion, a general relationship between the estimation error bias and the neighboring size for the plug-in entropy estimator is established without any assumption on the data for two different norms. The theoretical analysis based on the maximum norm developed coincides with the experimental results drawn from numerical tests made by Kraskov et al. [Phys. Rev. E 69, 066138 (2004)]. To further validate the novel relation, a weighted linear combination of distinct mutual information estimators is proposed and, using simulated signals, the comparison of different strategies allows for corroborating the theoretical analysis.
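The k-nearest-neighbor plug-in estimator this line of work builds on can be sketched as follows. This is the standard KSG estimator (algorithm 1 of Kraskov et al., maximum norm), not the authors' weighted bias-reduction scheme; the bivariate-Gaussian check uses the known analytic mutual information.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mutual_information(x, y, k=5):
    """KSG estimator (algorithm 1) of I(X;Y) in nats, maximum norm."""
    x = np.asarray(x).reshape(len(x), -1)
    y = np.asarray(y).reshape(len(y), -1)
    n = len(x)
    z = np.hstack([x, y])

    # Distance to the k-th nearest neighbor in the joint space (self excluded).
    eps = cKDTree(z).query(z, k=k + 1, p=np.inf)[0][:, -1]

    # Count strictly closer neighbors in each marginal space (self excluded).
    nx = cKDTree(x).query_ball_point(x, eps - 1e-12, p=np.inf, return_length=True) - 1
    ny = cKDTree(y).query_ball_point(y, eps - 1e-12, p=np.inf, return_length=True) - 1

    return (digamma(k) + digamma(n)
            - np.mean(digamma(nx + 1) + digamma(ny + 1)))

rng = np.random.default_rng(2)
rho = 0.8
xy = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=2000)
est = ksg_mutual_information(xy[:, 0], xy[:, 1], k=5)
true_mi = -0.5 * np.log(1 - rho**2)  # analytic MI of a bivariate Gaussian
print(f"KSG estimate: {est:.3f} nats, analytic value: {true_mi:.3f} nats")
```

The bias the paper analyzes enters through the neighborhood size k: larger k lowers variance but increases the Taylor-expansion bias term.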

  18. Bias reduction in the estimation of mutual information

    NASA Astrophysics Data System (ADS)

    Zhu, Jie; Bellanger, Jean-Jacques; Shu, Huazhong; Yang, Chunfeng; Le Bouquin Jeannès, Régine

    2014-11-01

    This paper deals with the control of bias estimation when estimating mutual information from a nonparametric approach. We focus on continuously distributed random data and the estimators we developed are based on a nonparametric k-nearest-neighbor approach for arbitrary metrics. Using a multidimensional Taylor series expansion, a general relationship between the estimation error bias and the neighboring size for the plug-in entropy estimator is established without any assumption on the data for two different norms. The theoretical analysis based on the maximum norm developed coincides with the experimental results drawn from numerical tests made by Kraskov et al. [Phys. Rev. E 69, 066138 (2004), 10.1103/PhysRevE.69.066138]. To further validate the novel relation, a weighted linear combination of distinct mutual information estimators is proposed and, using simulated signals, the comparison of different strategies allows for corroborating the theoretical analysis.

  19. Estimating avian population size using Bowden's estimator

    USGS Publications Warehouse

    Diefenbach, D.R.

    2009-01-01

    Avian researchers often uniquely mark birds, and multiple estimators could be used to estimate population size using individually identified birds. However, most estimators of population size require that all sightings of marked birds be uniquely identified, and many assume homogeneous detection probabilities. Bowden's estimator can incorporate sightings of marked birds that are not uniquely identified and relax assumptions required of other estimators. I used computer simulation to evaluate the performance of Bowden's estimator for situations likely to be encountered in bird studies. When the assumptions of the estimator were met, abundance and variance estimates and confidence-interval coverage were accurate. However, precision was poor for small population sizes (N ≤ 50) unless a large percentage of the population was marked (>75%) and multiple (≥8) sighting surveys were conducted. If additional birds are marked after sighting surveys begin, it is important to initially mark a large proportion of the population (p_m ≥ 0.5 if N ≤ 100 or p_m > 0.1 if N ≥ 250) and minimize sightings in which birds are not uniquely identified; otherwise, most population estimates will be overestimated by >10%. Bowden's estimator can be useful for avian studies because birds can be resighted multiple times during a single survey, not all sightings of marked birds have to uniquely identify individuals, detection probabilities among birds can vary, and the complete study area does not have to be surveyed. I provide computer code for use with pilot data to design mark-resight surveys to meet desired precision for abundance estimates. © 2009 by The American Ornithologists' Union. All rights reserved.

  20. Surety theoretics: The forest or the trees?

    SciTech Connect

    Senglaub, M.

    1997-10-30

    Periodically one needs to re-examine the objectives and efforts associated with a field of study. In the case of surety, which comprises safety, security, and reliability, one needs to be sure that theoretical efforts support the needs of systems and design engineers in satisfying stakeholder requirements. The current focus in the surety areas does not appear to address the theoretical foundations needed by the systems engineer. Examination of papers and abstracts demonstrates significant effort along the lines of thermal hydraulics, chemistry, structural response, control theory, etc., which are analytical disciplines that support a surety theoretic but do not constitute one. The representations currently employed, fault trees and the like, define static representations of a system, not the dynamic representation characteristic of response under abnormal, hostile, or degrading conditions. Current methodologies would require a semi-infinite set of scenarios to be examined before a system could be certified as satisfying a surety requirement. The elements required of a surety theoretic must include: (1) a dynamic representation of the system; (2) the ability to automatically identify terminal states of the system; and (3) the ability to determine the probabilities of specified terminal states under dynamic conditions. This paper examines the requirements of a surety theoretic that will support the efforts of the design and development engineer. Speculations then follow on technologies that might provide the theoretical and support foundations needed by the systems engineering community to form a robust surety analysis and design environment.

  1. Theoretical models for trace gas preconcentrators

    NASA Astrophysics Data System (ADS)

    Kim, Jihyun

    2013-11-01

    Muntz et al., in 2004 and 2011, attempted to describe theoretical models for the shape of the main flow channel and the trace-gas concentration ratio of a Continuous Flow-Through Trace Gas Preconcentrator using the concepts of net flux and mass flow rate, respectively. These models suggested that a theory of the preconcentrator was possible even though they did not agree with experimental results, because they considered only free molecular flow. In this study, new theoretical models based on net flux and mass flow rate have been developed for each regime: free molecular flow, transition flow, and hydrodynamic flow. The new models derived from mass flow rate yield comprehensive numerical models describing all regimes, whereas the models derived from net flux can be obtained only for hydrodynamic flow. The numerical predictions were compared with existing experimental results for the prototype preconcentrator. The predictions for hydrodynamic and transition flows based on mass flow rate were close to the experimental results, but the other cases differed from the experimental data. Nevertheless, the theoretical models provide a path toward developing a theory of the preconcentrator.

  2. Theoretical approximations and experimental extinction coefficients of biopharmaceuticals.

    PubMed

    Miranda-Hernández, Mariana P; Valle-González, Elba R; Ferreira-Gómez, David; Pérez, Néstor O; Flores-Ortiz, Luis F; Medina-Rivero, Emilio

    2016-02-01

    UV spectrophotometric measurement is a widely accepted and standardized routine analysis for quantitation of highly purified proteins; however, the reliability of the results strictly depends on the accuracy of the employed extinction coefficients. In this work, an experimental estimation of the differential refractive index (dn/dc), based on dry weight measurements, was performed in order to determine accurate extinction coefficients for four biotherapeutic proteins and one synthetic copolymer after separation in a size-exclusion ultra-performance liquid chromatograph coupled to an ultraviolet, multiangle light scattering and refractive index (SE-UPLC-UV-MALS-RI) multidetection system. The results showed small deviations with respect to theoretical values, calculated from the specific amino acid sequences, for all the studied immunoglobulins. Nevertheless, for proteins like etanercept and glatiramer acetate, several considerations, such as glycan content, partial specific volume, polarizability, and higher-order structure, should be taken into account to properly calculate theoretical extinction coefficient values. Herein, these values were assessed with simple approximations. The precision of the experimentally obtained extinction coefficients, and their convergence toward the theoretical values, makes them useful for characterization and comparability exercises. Also, these values provide insight into the absorbance and scattering properties of the evaluated proteins. Overall, this methodology is capable of providing accurate extinction coefficients useful for development studies.

  3. Theoretical relationship between elastic wave velocity and electrical resistivity

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Sub; Yoon, Hyung-Koo

    2015-05-01

    Elastic wave velocity and electrical resistivity have been commonly applied to estimate stratum structures and obtain subsurface soil design parameters. Both elastic wave velocity and electrical resistivity are related to the void ratio; the objective of this study is therefore to suggest a theoretical relationship between the two physical parameters. Gassmann theory and Archie's equation are applied to propose a new theoretical equation, which relates the compressional wave velocity to shear wave velocity and electrical resistivity. The piezo disk element (PDE) and bender element (BE) are used to measure the compressional and shear wave velocities, respectively. In addition, the electrical resistivity is obtained by using the electrical resistivity probe (ERP). The elastic wave velocity and electrical resistivity are recorded in several types of soils including sand, silty sand, silty clay, silt, and clay-sand mixture. The appropriate input parameters are determined based on the error norm in order to increase the reliability of the proposed relationship. The predicted compressional wave velocities from the shear wave velocity and electrical resistivity are similar to the measured compressional velocities. This study demonstrates that the new theoretical relationship may be effectively used to predict the unknown geophysical property from the measured values.
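The chain of reasoning (resistivity → porosity → velocity) can be illustrated with Archie's equation. A minimal sketch: here the Wyllie time-average relation stands in for the paper's Gassmann-based link, and all coefficients and resistivity values are illustrative, not from the study.

```python
import numpy as np

def porosity_from_archie(rho_bulk, rho_water, a=1.0, m=2.0):
    """Archie's equation for a fully saturated medium: rho = a * rho_w * phi**-m."""
    return (a * rho_water / rho_bulk) ** (1.0 / m)

def vp_wyllie(phi, v_fluid=1500.0, v_matrix=5500.0):
    """Wyllie time-average: 1/Vp = phi/Vf + (1 - phi)/Vm (velocities in m/s)."""
    return 1.0 / (phi / v_fluid + (1.0 - phi) / v_matrix)

rho_water = 0.2    # pore-water resistivity (ohm·m), illustrative
rho_bulk = 2.0     # measured bulk resistivity (ohm·m), illustrative

phi = porosity_from_archie(rho_bulk, rho_water)
vp = vp_wyllie(phi)
print(f"porosity = {phi:.3f}, predicted Vp = {vp:.0f} m/s")
```

Both relations share the void ratio (through porosity) as the common variable, which is exactly the coupling the paper exploits to predict one geophysical property from the other.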

  4. Theoretical Description of the Fission Process

    SciTech Connect

    Witold Nazarewicz

    2009-10-25

    Advanced theoretical methods and high-performance computers may finally unlock the secrets of nuclear fission, a fundamental nuclear decay that is of great relevance to society. In this work, we studied the phenomenon of spontaneous fission using the symmetry-unrestricted nuclear density functional theory (DFT). Our results show that many observed properties of fissioning nuclei can be explained in terms of pathways in multidimensional collective space corresponding to different geometries of fission products. From the calculated collective potential and collective mass, we estimated spontaneous fission half-lives, and good agreement with experimental data was found. We also predicted a new phenomenon of trimodal spontaneous fission for some transfermium isotopes. Our calculations demonstrate that fission barriers of excited superheavy nuclei vary rapidly with particle number, pointing to the importance of shell effects even at large excitation energies. The results are consistent with recent experiments where superheavy elements were created by bombarding an actinide target with 48-calcium; yet even at high excitation energies, sizable fission barriers remained. Not only does this reveal clues about the conditions for creating new elements, it also provides a wider context for understanding other types of fission. Understanding of the fission process is crucial for many areas of science and technology. Fission governs existence of many transuranium elements, including the predicted long-lived superheavy species. In nuclear astrophysics, fission influences the formation of heavy elements on the final stages of the r-process in a very high neutron density environment. Fission applications are numerous. Improved understanding of the fission process will enable scientists to enhance the safety and reliability of the nation’s nuclear stockpile and nuclear reactors. The deployment of a fleet of safe and efficient advanced reactors, which will also minimize radiotoxic

  5. Topics in global convergence of density estimates

    NASA Technical Reports Server (NTRS)

    Devroye, L.

    1982-01-01

    The problem of estimating a density f on R^d from a sample X(1),...,X(n) of independent identically distributed random vectors is critically examined, and some recent results in the field are reviewed. The following statements are qualified: (1) for any sequence of density estimates f_n, any arbitrarily slow rate of convergence to 0 is possible for E(∫|f_n - f|); (2) in theoretical comparisons of density estimates, ∫|f_n - f| should be used and not ∫|f_n - f|^p, p > 1; and (3) for most reasonable nonparametric density estimates, either there is convergence of ∫|f_n - f| (and then the convergence is in the strongest possible sense for all f), or there is no convergence (even in the weakest possible sense for a single f). There is no intermediate situation.

  6. Eighteenth annual West Coast theoretical chemistry conference

    SciTech Connect

    1997-05-01

    Abstracts are presented from the eighteenth annual west coast theoretical chemistry conference. Topics include molecular simulations; quasiclassical simulations of reactions; photodissociation reactions; molecular dynamics;interface studies; electronic structure; and semiclassical methods of reactive systems.

  7. A theoretical approach to measuring pilot workload

    NASA Technical Reports Server (NTRS)

    Kantowitz, B. H.

    1984-01-01

    Theoretical assumptions used by researchers in the area of attention were studied, with emphasis upon errors and inconsistent assumptions made by some researchers. Two GAT experiments, two laboratory studies, and one field experiment were conducted.

  8. Theoretical Orientation and Attitudes toward Women

    ERIC Educational Resources Information Center

    Davenport, Judith; Reims, Nancy

    1978-01-01

    This research study explored possible associations between the theoretical orientations of clinicians and their traditional or contemporary attitudes toward women's roles. Of all the variables investigated, however, only the clinicians' sex had an effect on their attitudes. (Author)

  9. The Future of Theoretical Physics and Cosmology

    NASA Astrophysics Data System (ADS)

    Gibbons, G. W.; Shellard, E. P. S.; Rankin, S. J.

    2003-11-01

    Based on lectures given in honor of Stephen Hawking's 60th birthday, this book comprises contributions from the world's leading theoretical physicists. Popular lectures progress to a critical evaluation of more advanced subjects in modern cosmology and theoretical physics. Topics covered include the origin of the universe, warped spacetime, cosmological singularities, quantum gravity, black holes, string theory, quantum cosmology and inflation. The volume provides a fascinating overview of the variety of subjects to which Stephen Hawking has contributed.

  10. Work Domain Analysis: Theoretical Concepts and Methodology

    DTIC Science & Technology

    2005-02-01

    Neelam Naikar, Robyn Hopcroft, and Anna Moylan, Air Operations… This report presents a theoretical and methodological approach for work domain analysis (WDA), the first phase of cognitive work analysis. The report: (1) addresses a number of

  11. Fuel Burn Estimation Using Real Track Data

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano B.

    2011-01-01

    A procedure for estimating fuel burned based on actual flight track data, and drag and fuel-flow models is described. The procedure consists of estimating aircraft and wind states, lift, drag and thrust. Fuel-flow for jet aircraft is determined in terms of thrust, true airspeed and altitude as prescribed by the Base of Aircraft Data fuel-flow model. This paper provides a theoretical foundation for computing fuel-flow with most of the information derived from actual flight data. The procedure does not require an explicit model of thrust and calibrated airspeed/Mach profile which are typically needed for trajectory synthesis. To validate the fuel computation method, flight test data provided by the Federal Aviation Administration were processed. Results from this method show that fuel consumed can be estimated within 1% of the actual fuel consumed in the flight test. Next, fuel consumption was estimated with simplified lift and thrust models. Results show negligible difference with respect to the full model without simplifications. An iterative takeoff weight estimation procedure is described for estimating fuel consumption, when takeoff weight is unavailable, and for establishing fuel consumption uncertainty bounds. Finally, the suitability of using radar-based position information for fuel estimation is examined. It is shown that fuel usage could be estimated within 5.4% of the actual value using positions reported in the Airline Situation Display to Industry data with simplified models and iterative takeoff weight computation.
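The estimation chain described (thrust from the point-mass equations of motion, then fuel flow from thrust, true airspeed, and altitude) can be sketched as below. The thrust-specific fuel consumption follows the BADA-style form for jets, but the coefficients, drag value, and mass are illustrative assumptions, not values for any real aircraft.

```python
import math

G = 9.80665  # gravitational acceleration, m/s^2

def thrust_required(mass, drag, dv_dt, gamma):
    """Point-mass along-track balance: T = D + m*dV/dt + m*g*sin(gamma)."""
    return drag + mass * dv_dt + mass * G * math.sin(gamma)

def fuel_flow_jet(thrust_n, tas_kt, cf1=0.7, cf2=1000.0):
    """BADA-style jet fuel flow: eta = Cf1*(1 + V_TAS/Cf2) in kg/(min*kN),
    nominal fuel flow = eta * thrust. Coefficients here are illustrative."""
    eta = cf1 * (1.0 + tas_kt / cf2)
    return eta * (thrust_n / 1000.0)  # kg/min

# Illustrative cruise point: 60 t aircraft in level, unaccelerated flight,
# so thrust simply balances the assumed drag.
mass = 60000.0          # kg
drag = 40000.0          # N (assumed)
thrust = thrust_required(mass, drag, dv_dt=0.0, gamma=0.0)
ff = fuel_flow_jet(thrust, tas_kt=450.0)
print(f"thrust = {thrust/1000:.1f} kN, fuel flow = {ff:.1f} kg/min")
```

Integrating such a fuel-flow estimate along the recorded track, as the paper does with flight-test data, yields the total fuel burned.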

  12. Application of maximum-likelihood estimation in optical coherence tomography for nanometer-class thickness estimation

    NASA Astrophysics Data System (ADS)

    Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.

    2015-03-01

    In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread functions (PSF) and various sources of noise on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves 0.93 micron axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.

  13. Spectral procedures for estimating crop biomass

    SciTech Connect

    Wanjura, D.F.; Hatfield, J.L.

    1985-05-01

    Spectral reflectance was measured semi-weekly and used to estimate leaf area and plant dry weight accumulation in cotton, soybeans, and sunflower. Integration of spectral crop growth cycle curves explained up to 95 and 91%, respectively, of the variation in cotton lint yield and dry weight. A theoretical relationship for dry weight accumulation, in which only intercepted radiation or intercepted radiation and solar energy to biomass conversion efficiency were spectrally estimated, explained 99 and 96%, respectively, of the observed plant dry weight variation of the three crops. These results demonstrate the feasibility of predicting crop biomass from spectral measurements collected frequently during the growing season. 15 references.

  14. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical data base of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.
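
    A minimal version of the weight-cost relationship examined here is a power-law cost estimating relationship (CER), cost = a * weight^b, fit by least squares in log-log space. The data points below are invented for illustration, not drawn from the historical database.

```python
import numpy as np

# Hypothetical program weights (kg) and development costs ($M)
weight = np.array([500.0, 1200.0, 2500.0, 4000.0, 7500.0, 12000.0])
cost = np.array([38.0, 71.0, 120.0, 168.0, 260.0, 360.0])

# Fit cost = a * weight**b by linear regression on the logs
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)

predicted = a * weight ** b
print(round(b, 2))   # exponent below 1 implies economies of scale with size
```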

  15. Theoretical performance assessment and empirical analysis of super-resolution under unknown affine sensor motion.

    PubMed

    Thelen, Brian J; Valenzuela, John R; LeBlanc, Joel W

    2016-04-01

    This paper deals with super-resolution (SR) processing and associated theoretical performance assessment for under-sampled video data collected from a moving imaging platform with unknown motion and assuming a relatively flat scene. This general scenario requires joint estimation of the high-resolution image and the parameters that determine a projective transform that relates the collected frames to one another. A quantitative assessment of the variance in the random error as achieved through a joint-estimation approach (e.g., SR image reconstruction and motion estimation) is carried out via the general framework of M-estimators and asymptotic statistics. This approach provides a performance measure on estimating the fine-resolution scene when there is a lack of perspective information and represents a significant advancement over previous work that considered only the more specific scenario of mis-registration. A succinct overview of the theoretical framework is presented along with some specific results on the approximate random error for the case of unknown translation and affine motions. A comparison is given between the approximated random error and that actually achieved by an M-estimator approach to the joint-estimation problem. These results provide insight on the reduction in SR reconstruction accuracy when jointly estimating unknown inter-frame affine motion.

  16. Theoretical nuclear database for high-energy, heavy-ion (HZE) transport

    NASA Technical Reports Server (NTRS)

    Townsend, L. W.; Cucinotta, F. A.; Wilson, J. W.

    1995-01-01

    Theoretical methods for estimating high-energy, heavy-ion (HZE) particle absorption and fragmentation cross-sections are described and compared with available experimental data. Differences between theory and experiment range from several percent for absorption cross-sections up to about 25%-50% for fragmentation cross-sections.

  17. Theoretical and Statistical Derivation of a Screener for the Behavioral Assessment of Executive Functions in Children

    ERIC Educational Resources Information Center

    Garcia-Barrera, Mauricio A.; Kamphaus, Randy W.; Bandalos, Deborah

    2011-01-01

    The problem of valid measurement of psychological constructs remains an impediment to scientific progress, and the measurement of executive functions is not an exception. This study examined the statistical and theoretical derivation of a behavioral screener for the estimation of executive functions in children from the well-established Behavior…

  18. Aircraft parameter estimation

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.

    1987-01-01

    The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
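
    A minimal sketch of the idea on a simulated example (not the paper's F-14, HiMAT, or shuttle setups): for the one-degree-of-freedom roll equation p_dot = Lp*p + Lda*da with Gaussian measurement noise, the maximum-likelihood estimate of the stability and control derivatives reduces to least squares. All dynamics and noise values are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n = 0.01, 1000
Lp_true, Lda_true = -2.0, 8.0            # roll damping, aileron effectiveness

# Doublet-like square-wave aileron input
da = np.sign(np.sin(2 * np.pi * 0.5 * np.arange(n) * dt))
p = np.zeros(n)
for k in range(n - 1):                   # Euler integration of p_dot
    p[k + 1] = p[k] + dt * (Lp_true * p[k] + Lda_true * da[k])
p_meas = p + 0.01 * rng.standard_normal(n)

# Regress the finite-difference p_dot on [p, da]; under Gaussian noise
# this least-squares fit is the ML estimate of (Lp, Lda)
pdot = np.diff(p_meas) / dt
A = np.column_stack([p_meas[:-1], da[:-1]])
(Lp_hat, Lda_hat), *_ = np.linalg.lstsq(A, pdot, rcond=None)
print(round(Lp_hat, 1), round(Lda_hat, 1))
```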

  19. Information geometric density estimation

    NASA Astrophysics Data System (ADS)

    Sun, Ke; Marchand-Maillet, Stéphane

    2015-01-01

    We investigate kernel density estimation where the kernel function varies from point to point. Density estimation in the input space means to find a set of coordinates on a statistical manifold. This novel perspective helps to combine efforts from information geometry and machine learning to spawn a family of density estimators. We present example models with simulations. We discuss the principle and theory of such density estimation.
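
    One simple way a kernel can "vary from point to point" is a balloon-type estimator whose bandwidth at each evaluation point is the distance to the k-th nearest sample. This is a generic sketch, not the paper's information-geometric construction.

```python
import numpy as np

def adaptive_kde(x_eval, data, k=20):
    # Balloon estimator: Gaussian kernel, bandwidth = k-NN distance at x
    dens = np.empty_like(x_eval)
    for i, x in enumerate(x_eval):
        d = np.sort(np.abs(data - x))
        h = max(d[k], 1e-3)                        # local bandwidth at x
        u = (x - data) / h
        dens[i] = np.exp(-0.5 * u * u).sum() / (data.size * h * np.sqrt(2 * np.pi))
    return dens

rng = np.random.default_rng(3)
sample = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(3, 1.0, 500)])
xs = np.linspace(-5.0, 7.0, 241)
fhat = adaptive_kde(xs, sample)
area = ((fhat[:-1] + fhat[1:]) / 2.0 * np.diff(xs)).sum()
print(round(area, 2))     # should be close to 1
```

    The narrow mode gets a small local bandwidth and the broad mode a large one, which a fixed-bandwidth estimator cannot do.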

  20. Fuel Burn Estimation Model

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano

    2011-01-01

    Conclusions: The fuel estimation procedure was validated using flight test data. A good fuel model can be created if weight and fuel data are available. An error in the assumed takeoff weight results in a similar amount of error in the fuel estimate, and fuel estimation error bounds can be determined.
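
    The weight-error conclusion can be illustrated with the Breguet range equation (a standard cruise model, not necessarily the paper's): fuel burned over a fixed range is proportional to takeoff weight, so a relative error in assumed takeoff weight propagates one-for-one into the fuel estimate. All parameter values below are invented.

```python
import math

def cruise_fuel(w_takeoff, rng_km=3000.0, v_kmh=850.0, ld=17.0, tsfc=0.6):
    # Breguet: range = (v / tsfc) * (L/D) * ln(W_to / W_land); solve for fuel
    w_land = w_takeoff * math.exp(-rng_km * tsfc / (v_kmh * ld))
    return w_takeoff - w_land

f_true = cruise_fuel(70000.0)           # kg, assumed true takeoff weight
f_bias = cruise_fuel(70000.0 * 1.05)    # 5% overestimate of takeoff weight
print(round(f_bias / f_true - 1.0, 3))  # fuel error tracks the weight error
```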

  1. Making Connections with Estimation.

    ERIC Educational Resources Information Center

    Lobato, Joanne E.

    1993-01-01

    Describes four methods to structure estimation activities that enable students to make connections between their understanding of numbers and extensions of those concepts to estimating. Presents activities that connect estimation with other curricular areas, other mathematical topics, and real-world applications. (MDH)

  2. Price Estimation Guidelines

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.

    1985-01-01

    Improved Price Estimation Guidelines, IPEG4, program provides comparatively simple, yet relatively accurate estimate of price of manufactured product. IPEG4 processes user supplied input data to determine estimate of price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on industry wide or process wide basis.
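
    A sketch of the kind of roll-up such a tool performs: annual costs (equipment amortization, floor space, labor, materials, utilities) are summed and divided by annual production volume. The rates and multipliers below are invented placeholders, not IPEG4's actual coefficients.

```python
def price_per_unit(equipment, space_m2, labor, materials, utilities,
                   volume, amort_years=7.0, space_rate=180.0, overhead=2.1):
    annual_cost = (equipment / amort_years      # straight-line amortization
                   + space_m2 * space_rate      # facility cost per m^2-year
                   + labor * overhead           # labor burdened with overhead
                   + materials + utilities)
    return annual_cost / volume

p = price_per_unit(equipment=2.0e6, space_m2=800, labor=9.0e5,
                   materials=1.2e6, utilities=1.5e5, volume=5.0e5)
print(round(p, 2))    # estimated price per unit of production
```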

  3. Computational and theoretical methods for protein folding.

    PubMed

    Compiani, Mario; Capriotti, Emidio

    2013-12-03

    A computational approach is essential whenever the complexity of the process under study is such that direct theoretical or experimental approaches are not viable. This is the case for protein folding, for which a significant amount of data are being collected. This paper reports on the essential role of in silico methods and the unprecedented interplay of computational and theoretical approaches, which is a defining point of the interdisciplinary investigations of the protein folding process. Besides giving an overview of the available computational methods and tools, we argue that computation plays not merely an ancillary role but has a more constructive function in that computational work may precede theory and experiments. More precisely, computation can provide the primary conceptual clues to inspire subsequent theoretical and experimental work even in a case where no preexisting evidence or theoretical frameworks are available. This is cogently manifested in the application of machine learning methods to come to grips with the folding dynamics. These close relationships suggested complementing the review of computational methods within the appropriate theoretical context to provide a self-contained outlook of the basic concepts that have converged into a unified description of folding and have grown in a synergic relationship with their computational counterpart. Finally, the advantages and limitations of current computational methodologies are discussed to show how the smart analysis of large amounts of data and the development of more effective algorithms can improve our understanding of protein folding.

  4. Sibutramine characterization and solubility, a theoretical study

    NASA Astrophysics Data System (ADS)

    Aceves-Hernández, Juan M.; Nicolás Vázquez, Inés; Hinojosa-Torres, Jaime; Penieres Carrillo, Guillermo; Arroyo Razo, Gabriel; Miranda Ruvalcaba, René

    2013-04-01

    Solubility data from sibutramine (SBA) in a family of alcohols were obtained at different temperatures. Sibutramine was characterized by using thermal analysis and X-ray diffraction technique. Solubility data were obtained by the saturation method. The van't Hoff equation was used to obtain the theoretical solubility values and the ideal solvent activity coefficient. No polymorphic phenomena were found from the X-ray diffraction analysis, even though this compound is a racemic mixture of (+) and (-) enantiomers. Theoretical calculations showed that the polarisable continuum model was able to reproduce the solubility and stability of sibutramine molecule in gas phase, water and a family of alcohols at B3LYP/6-311++G (d,p) level of theory. Dielectric constant, dipolar moment and solubility in water values as physical parameters were used in those theoretical calculations for explaining that behavior. Experimental and theoretical results were compared and good agreement was obtained. Sibutramine solubility increased from methanol to 1-octanol in theoretical and experimental results.
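
    The van't Hoff ideal-solubility relation used in the paper can be written as ln x_ideal = -(dH_fus / R) * (1/T - 1/T_m). The sketch below evaluates it with placeholder thermodynamic values, not measured sibutramine data; it reproduces the expected increase of ideal solubility with temperature.

```python
import math

R = 8.314            # J/(mol K)
dH_fus = 25_000.0    # J/mol, hypothetical enthalpy of fusion
T_m = 465.0          # K, hypothetical melting temperature

def x_ideal(T):
    # Ideal mole-fraction solubility from the van't Hoff relation
    return math.exp(-dH_fus / R * (1.0 / T - 1.0 / T_m))

for T in (298.15, 308.15, 318.15):
    print(T, round(x_ideal(T), 3))
```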

  5. Using SEQUEST with Theoretically Complete Sequence Databases

    NASA Astrophysics Data System (ADS)

    Sadygov, Rovshan G.

    2015-11-01

    SEQUEST has long been used to identify peptides/proteins from their tandem mass spectra and protein sequence databases. The algorithm has proven to be hugely successful for its sensitivity and specificity in identifying peptides/proteins whose sequences are present in the protein sequence databases. In this work, we report a new use of the algorithm: applying it to search a complete list of theoretically possible peptides, a de novo-like sequencing approach. We used freely available mass spectral data and determined a number of unique peptides as identified by SEQUEST. Using the masses of these peptides and a mass accuracy of 0.001 Da, we created a database of all theoretically possible peptide sequences corresponding to the precursor masses. We used our recently developed algorithm for determining all amino acid compositions corresponding to a mass interval, and used a lexicographic ordering to generate theoretical sequences from the compositions. The newly generated theoretical database was many-fold more complex than the original protein sequence database. We used SEQUEST to search and identify the best matches to the spectra from all theoretically possible peptide sequences. We found that the SEQUEST cross-correlation score ranked the correct peptide match among the top sequence matches. The results testify to the high specificity of SEQUEST when combined with high mass accuracy for intact peptides.

  6. Towards general information theoretical representations of database problems

    SciTech Connect

    Joslyn, C.

    1997-06-01

    General database systems are described from the General Systems Theoretical (GST) framework. In this context traditional information theoretical (statistical) and general information theoretical (fuzzy measure and set theoretical, possibilistic, and random set theoretical) representations are derived. A preliminary formal framework is introduced.

  7. Modeling of Closed-Die Forging for Estimating Forging Load

    NASA Astrophysics Data System (ADS)

    Sheth, Debashish; Das, Santanu; Chatterjee, Avik; Bhattacharya, Anirban

    2017-02-01

    Closed-die forging is a common metal forming process used for making a range of products. Sufficient load must be exerted on the billet to deform the material, and this forging load depends on the work material properties and on the friction between the workpiece, punch, and die. Several researchers have worked on estimating the forging load for specific products under different process variables, using experimental data on deformation resistance and friction to calculate the load. In this work, a theoretical estimate of the forging load is compared with the value obtained from an LS-DYNA finite element model. The theoretical work uses the slab method to assess the forging load for an axisymmetric upsetting job made of lead. The theoretical forging load estimate is slightly higher than the experimental value, whereas the simulation matches the experimental forging load quite closely, indicating that this simulation software could see wide use.
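
    A slab-method sketch for the axisymmetric upsetting case: a common mean-pressure approximation with Coulomb friction is p_mean = sigma_f * (1 + 2*mu*R / (3*h)), and the load is p_mean times the contact area. The flow stress (soft, lead-like), geometry, and friction coefficient below are illustrative assumptions, not the paper's experimental values.

```python
import math

def upsetting_load(sigma_f_mpa, radius_mm, height_mm, mu):
    # Slab-method mean forging pressure for axisymmetric upsetting
    p_mean = sigma_f_mpa * (1.0 + 2.0 * mu * radius_mm / (3.0 * height_mm))
    area_mm2 = math.pi * radius_mm ** 2
    return p_mean * area_mm2 / 1000.0    # MPa * mm^2 -> kN

F = upsetting_load(sigma_f_mpa=20.0, radius_mm=25.0, height_mm=30.0, mu=0.25)
print(round(F, 1))                       # forging load in kN
```

    The friction term grows as the billet flattens (R up, h down), which is why the load climbs steeply late in the stroke.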

  8. The carcinogenic risk of some organic vapors indoors: A theoretical survey

    NASA Astrophysics Data System (ADS)

    Tancrède, M.; Wilson, R.; Zeise, L.; Crouch, E. A. C.

    This exploratory report examines the risk of selected organic air pollutants measured in homes in the United States and the Netherlands. Under several theoretical assumptions, estimates are made for the carcinogenic potency of each chemical; combined with the exposure measurements, these give estimates of cancer risk. These estimates are compared with the risks of the same pollutants outdoors and in drinking water, and also with other well-known indoor air pollutants: cigarette smoke, radon gas, and formaldehyde. These comparisons indicate priorities for action. Some suggestions are made for future studies.

  9. Moire interferometry near the theoretical limit.

    PubMed

    Weissman, E M; Post, D

    1982-05-01

    The theoretical upper limit of moire interferometry is approached as the reference grating pitch approaches lambda/2 and its frequency approaches 2/lambda. This work demonstrates the method at 97.6% of the theoretical limit. A virtual reference grating of 4000 lines/mm (101,600 lines/in.) was used in conjunction with a phase type reflection grating of half of that frequency on the specimen. Sensitivity was 0.25 microm/fringe (9.8 microin./fringe). In-plane displacement fringes of excellent definition were obtained throughout the 76 x 51-mm (3 x 2-in.) field of view. They were very closely packed, exhibiting a maximum fringe density of 24 fringes/mm (610 fringes/in.). Effectiveness of moire interferometry near the theoretical limit was proved.
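
    The quoted figures can be checked arithmetically: with a virtual reference grating of frequency f, the in-plane sensitivity is 1/f of displacement per fringe, and the theoretical frequency limit is 2/lambda. The 488 nm wavelength below is an assumption (a typical argon-ion laser line), not stated in the abstract.

```python
# Virtual grating frequency: 4000 lines/mm expressed in lines/m
f = 4000e3
per_fringe_um = 1.0 / f * 1e6
print(per_fringe_um)        # displacement per fringe in microns

lam = 488e-9                # assumed laser wavelength, m
f_limit = 2.0 / lam         # theoretical upper limit, lines/m
print(round(f / f_limit, 3))
```

    With the assumed 488 nm source this ratio comes out at 0.976, matching the abstract's "97.6% of the theoretical limit", and the sensitivity is the reported 0.25 um/fringe.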

  10. Helicopter impulsive noise: Theoretical and experimental status

    NASA Technical Reports Server (NTRS)

    Schmitz, F. H.; Yu, Y. H.

    1983-01-01

    The theoretical and experimental status of helicopter impulsive noise is reviewed. The two major source mechanisms of helicopter impulsive noise are addressed: high-speed impulsive noise and blade-vortex interaction impulsive noise. A thorough physical explanation of both generating mechanisms is presented together with model and full-scale measurements of the phenomena. Current theoretical prediction methods are compared with experimental findings of isolated rotor tests. The generating mechanisms of high-speed impulsive noise are fairly well understood - theory and experiment compare nicely over Mach number ranges typical of today's helicopters. For the case of blade-vortex interaction noise, understanding of the noise generating mechanisms and theoretical comparison with experiment are less satisfactory. Several methods for improving theory-experiment agreement are suggested.

  11. Theoretical dissociation energies for ionic molecules

    NASA Technical Reports Server (NTRS)

    Langhoff, S. R.; Bauschlicher, C. W., Jr.; Partridge, H.

    1986-01-01

    Ab initio calculations at the self-consistent-field and singles plus doubles configuration-interaction level are used to determine accurate spectroscopic parameters for most of the alkali and alkaline-earth fluorides, chlorides, oxides, sulfides, hydroxides, and isocyanides. Numerical Hartree-Fock (NHF) calculations are performed on selected systems to ensure that the extended Slater basis sets employed for the diatomic systems are near the Hartree-Fock limit. Extended Gaussian basis sets of at least triple-zeta plus double polarization quality are employed for the triatomic systems. With this model, correlation effects are relatively small, but invariably increase the theoretical dissociation energies. The importance of correlating the electrons on both the anion and the metal is discussed. The theoretical dissociation energies are critically compared with the literature to rule out disparate experimental values. Theoretical ²Π - ²Σ⁺ energy separations are presented for the alkali oxides and sulfides.

  12. Preliminary estimates of electrical generating capacity of slim holes--a theoretical approach

    SciTech Connect

    Pritchett, John W.

    1995-01-26

    The feasibility of using small geothermal generators (< 1 MWe) for off-grid electrical power in remote areas or for rural electrification in developing nations would be enhanced if drilling costs could be reduced. This paper examines the electrical generating capacity of fluids which can be produced from typical slim holes (six-inch diameter or less), both by binary techniques (with downhole pumps) and, for hotter reservoir fluids, by conventional spontaneous-discharge flash-steam methods. Depending mainly on reservoir temperature, electrical capacities from a few hundred kilowatts to over one megawatt per slim hole appear to be possible.

  13. Integrated Theoretical, Computational, and Experimental Studies for Transition Estimation and Control

    DTIC Science & Technology

    2014-06-03


  14. Theoretical Estimates of Reaction Observables vis-a-vis Modern Experiments

    DTIC Science & Technology

    2006-02-08


  15. Conceptual Challenges in Coordinating Theoretical and Data-Centered Estimates of Probability

    ERIC Educational Resources Information Center

    Konold, Cliff; Madden, Sandra; Pollatsek, Alexander; Pfannkuch, Maxine; Wild, Chris; Ziedins, Ilze; Finzer, William; Horton, Nicholas J.; Kazak, Sibel

    2011-01-01

    A core component of informal statistical inference is the recognition that judgments based on sample data are inherently uncertain. This implies that instruction aimed at developing informal inference needs to foster basic probabilistic reasoning. In this article, we analyze and critique the now-common practice of introducing students to both…

  16. A theoretical approach for estimation of ultimate size of bimetallic nanocomposites synthesized in microemulsion systems

    NASA Astrophysics Data System (ADS)

    Salabat, Alireza; Saydi, Hassan

    2012-12-01

    In this research, a new approach is proposed for predicting the ultimate sizes of bimetallic nanocomposites synthesized in water-in-oil microemulsion systems. In this method, an effective Hamaker constant is introduced by modifying the Tabor-Winterton approximation and applied in the van der Waals attractive interaction energy, which serves as the attractive contribution to the total interaction energy. The modified interaction energy successfully predicted the sizes of several bimetallic nanoparticles, at different mass fractions, synthesized in the microemulsion system of dioctyl sodium sulfosuccinate (AOT)/isooctane.

  17. Silicate impact-vapor condensate on the Moon: Theoretical estimates versus geochemical data

    NASA Astrophysics Data System (ADS)

    Svetsov, Vladimir V.; Shuvalov, Valery V.

    2016-01-01

    We numerically simulated the impacts of asteroids and comets on the Moon in order to calculate the amount of condensate that can be formed after the impacts and compare the results with data for lunar samples. Using available equations of state for quartz and dunite, we have determined pressure and density behind shock waves in these materials for particle velocities from 4 to 20 km/s and obtained release adiabats from various points on the Hugoniot curves to very low pressures. For shock waves with particle velocities behind the front below 8 km/s the release adiabats intersect the liquid branch of the two-phase curve and, during the following expansion, the liquid material vaporizes and does not condense, forming a two-phase mixture of melt and vapor. The condensate can appear during expansion of material compressed by a shock with higher (>8 km/s) velocities. Using our hydrocode SOVA, we have conducted numerical simulations of the impacts of spherical quartz, dunite, and water-ice projectiles into targets of the same materials. Impact velocities were 15-25 km/s for stony projectiles and 20-70 km/s for icy impactors, and impact angles were 45° and 90° to the target surface. Along with the masses of condensates we calculated the masses of vaporized and melted material. Upon the impact of a projectile consisting of dunite into a target of quartz at a speed of 20 km/s at an angle of 45°, vaporized and melted masses of the target are equal to 1.6 and 11 in units of projectile mass, respectively, and the mass of condensate is 0.19. Vaporized and condensed masses of the projectile are 0.16 and 0.02; more than 80% of the projectile mass is melted. The calculated ratio of vaporized to melted mass proved to be on the order of 0.1. However, we calculated that, at impact velocities below 20 km/s, the condensate mass is only a small fraction of the vaporized and melted masses and, consequently, the major part of vapor disperses in vacuum in the form of separate molecules or molecular clusters. At an impact velocity of 15 km/s, the abundance of silicate condensates relative to melt is 0.001-0.0001, in agreement with data from lunar samples. Should the observed condensate abundances be representative, the velocities of major asteroid impacts on the Moon could not substantially exceed 20 km/s. Comet impacts at the same velocities produce much smaller amounts of vapor condensate because the low densities of cometary material induce lower shock pressures in the target.

  18. THEORETICAL ESTIMATES OF TWO-POINT SHEAR CORRELATION FUNCTIONS USING TANGLED MAGNETIC FIELDS

    SciTech Connect

    Pandey, Kanhaiya L.; Sethi, Shiv K.

    2012-03-20

    The existence of primordial magnetic fields can induce matter perturbations with additional power at small scales as compared to the usual ΛCDM model. We study its implication within the context of a two-point shear correlation function from gravitational lensing. We show that a primordial magnetic field can leave its imprints on the shear correlation function at angular scales ≲ a few arcminutes. The results are compared with CFHTLS data, which yield some of the strongest known constraints on the parameters (strength and spectral index) of the primordial magnetic field. We also discuss the possibility of detecting sub-nanogauss fields using future missions such as SNAP.

  19. Novae a theoretical and observational study

    NASA Astrophysics Data System (ADS)

    Soraisam, Monika D.

    2016-02-01

    In this thesis, we present studies relating to novae that include both theoretical and observational aspects. Being hosted by accreting white dwarfs (WDs), they have drawn attention in the context of the supernova Ia (SN Ia) progenitor problem. In the case of the nova explosion, the WD host is not disrupted. Instead, it continues to supply energy, even after the optical outburst, via stable nuclear burning of the remnant hydrogen envelope that survived the outburst. Accordingly, nova emission progresses toward the harder part of the electromagnetic spectrum, where it lasts longer than in the optical regime. As a consequence, novae are found to constitute the majority of the observed supersoft X-ray sources (SSSs). This is particularly well established for the galaxy M31. For high mass accretion rates in the unstable nuclear burning regime (or nova regime), there is evidence that significant mass accumulation by the WD is possible. This paved the way for SN Ia progenitor models in the single degenerate (SD) scenario involving novae. Based on the statistics of novae in M31, which is the most frequently used target for nova surveys, we investigate the role that novae may play in producing SNe Ia. Using multicycle nova evolution models and the observationally inferred nova rate in M31, we estimate the maximal SN Ia rate that novae can produce, assuming that all of the involved WDs reach the Chandrasekhar mass. Comparing this rate to the observationally inferred SN Ia rate for M31 constrains the contribution of the nova channel to the SN Ia rate to 2-7%. Additionally, we demonstrate that a more powerful diagnostic can be obtained from statistics of fast novae, which are characterized by decline times t2 ≤ 10 days. Most novae resulting from a typical SD SN Ia progenitor accreting in the nova regime are fast. Specifically, as the WD in the nova grows in mass, it produces novae more frequently and with decreasing decline times. We therefore investigate how efficiently fast

  20. An improved theoretical model of acoustic agglomeration

    SciTech Connect

    Song, L. ); Koopmann, G.H. . Center for Acoustics and Vibration); Hoffmann, T.L. )

    1994-04-01

    An improved theoretical model is developed to describe the acoustic agglomeration of particles entrained in a gas medium. The improvements to the present theories are twofold: first, wave scattering is included in the orthokinetic interaction of particles and, second, hydrodynamic interaction, shown to be an important agglomeration mechanism under certain operating conditions, is incorporated into the model. Orthokinetic and hydrodynamic interactions introduce convergent velocities that cause particles to approach each other and collide. The convergent velocities are related to an acoustic agglomeration frequency function (AAFF) through a semi-statistical method. This function is the key parameter for the theoretical simulation of acoustic agglomeration.

  1. Local, nonlocal quantumness and information theoretic measures

    NASA Astrophysics Data System (ADS)

    Agrawal, Pankaj; Sazim, Sk; Chakrabarty, Indranil; Pati, Arun K.

    2016-08-01

    It has been suggested that there may exist quantum correlations that go beyond entanglement. The existence of such correlations can be revealed by information theoretic quantities such as quantum discord, but not by the conventional measures of entanglement. We argue that a state displays quantumness, that can be of local and nonlocal origin. Information theoretic measures not only characterize the nonlocal quantumness, but also the local quantumness, such as the “local superposition”. This can be a reason, why such measures are nonzero, when there is no entanglement. We consider a generalized version of the Werner state to demonstrate the interplay of local quantumness, nonlocal quantumness and classical mixedness of a state.

  2. Theoretical studies of chemical reaction dynamics

    SciTech Connect

    Schatz, G.C.

    1993-12-01

    This collaborative program with the Theoretical Chemistry Group at Argonne involves theoretical studies of gas phase chemical reactions and related energy transfer and photodissociation processes. Many of the reactions studied are of direct relevance to combustion; others are selected because they provide important examples of special dynamical processes, or are of relevance to experimental measurements. Both classical trajectory and quantum reactive scattering methods are used for these studies, and the types of information determined range from thermal rate constants to state-to-state differential cross sections.

  3. Theoretical ecology as etiological from the start.

    PubMed

    Donhauser, Justin

    2016-12-01

    The world's leading environmental advisory institutions look to ecological theory and research as an objective guide for policy and resource management decision-making. In addition to the theoretical and broadly philosophical merits of doing so, it is therefore practically significant to clear up confusions about ecology's conceptual foundations and to clarify the basic workings of inferential methods used in the science. Through discussion of key moments in the genesis of the theoretical branch of ecology, this essay elucidates a general heuristic role of teleological metaphors in ecological research and defuses certain enduring confusions about work in ecology.

  4. Biology is more theoretical than physics.

    PubMed

    Gunawardena, Jeremy

    2013-06-01

    The word "theory" is used in at least two senses--to denote a body of widely accepted laws or principles, as in "Darwinian theory" or "quantum theory," and to suggest a speculative hypothesis, often relying on mathematical analysis, that has not been experimentally confirmed. It is often said that there is no place for the second kind of theory in biology and that biology is not theoretical but based on interpretation of data. Here, ideas from a previous essay are expanded upon to suggest, to the contrary, that the second kind of theory has always played a critical role and that biology, therefore, is a good deal more theoretical than physics.

  5. Theoretical Chemistry At NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Langhoff, Stephen

    1996-01-01

    The theoretical work being carried out in the Computational Chemistry Branch at NASA Ames will be overviewed. This overview will be followed by a more in-depth discussion of our theoretical work to determine molecular opacities for the TiO and water molecules and a discussion of our density functional theory (DFT) calculations to determine the harmonic frequencies and intensities of the vibrational bands of polycyclic aromatic hydrocarbons (PAHs) to assess their role as carriers of the unidentified infrared (UIR) bands. Finally, a more in-depth discussion of our work in the area of computational molecular nanotechnology will be presented.

  6. Modern statistical estimation via oracle inequalities

    NASA Astrophysics Data System (ADS)

    Candès, Emmanuel J.

    A number of fundamental results in modern statistical theory involve thresholding estimators. This survey paper aims at reconstructing the history of how thresholding rules came to be popular in statistics and describing, in a not overly technical way, the domain of their application. Two notions play a fundamental role in our narrative: sparsity and oracle inequalities. Sparsity is a property of the object to estimate, which seems to be characteristic of many modern problems, in statistics as well as applied mathematics and theoretical computer science, to name a few. 'Oracle inequalities' are a powerful decision-theoretic tool which has served to understand the optimality of thresholding rules, but which has many other potential applications, some of which we will discuss. Our story is also the story of the dialogue between statistics and applied harmonic analysis. Starting with the work of Wiener, we will see that certain representations emerge as being optimal for estimation. A leitmotif throughout our exposition is that efficient representations lead to efficient estimation.
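
    The thresholding rules discussed above can be sketched in their simplest form: observe a sparse vector in Gaussian noise, y = theta + noise, and soft-threshold the observations at the universal level sigma * sqrt(2 log n). The signal below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma = 1024, 1.0
theta = np.zeros(n)
theta[:20] = 8.0                              # sparse signal: 20 large spikes

y = theta + sigma * rng.standard_normal(n)
lam = sigma * np.sqrt(2.0 * np.log(n))        # universal threshold
theta_hat = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)   # soft threshold

mse_thresh = np.mean((theta_hat - theta) ** 2)
mse_raw = np.mean((y - theta) ** 2)           # no shrinkage: MSE ~ sigma^2
print(round(mse_thresh, 3), round(mse_raw, 3))
```

    Because the signal is sparse, killing the small coefficients removes most of the noise at the cost of a bias on the few large ones, so the thresholded estimate beats the raw observations; this is the behavior that oracle inequalities make precise.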

  7. Theoretical Overview on the Improvement of Interest in Learning Theoretical Course for Engineering Students

    ERIC Educational Resources Information Center

    Xiao, Manlin; Zhang, Jianglin

    2016-01-01

The phenomenon that engineering students have little interest in theoretical knowledge learning is increasingly apparent. Therefore, most students fail to understand and apply theories to solve practical problems. To solve this problem, the importance of improving students' interest in learning theoretical courses is discussed first in this…

  8. Theoretical analysis of tsunami generation by pyroclastic flows

    USGS Publications Warehouse

    Watts, P.; Waythomas, C.F.

    2003-01-01

    Pyroclastic flows are a common product of explosive volcanism and have the potential to initiate tsunamis whenever thick, dense flows encounter bodies of water. We evaluate the process of tsunami generation by pyroclastic flow by decomposing the pyroclastic flow into two components, the dense underflow portion, which we term the pyroclastic debris flow, and the plume, which includes the surge and coignimbrite ash cloud parts of the flow. We consider five possible wave generation mechanisms. These mechanisms consist of steam explosion, pyroclastic debris flow, plume pressure, plume shear, and pressure impulse wave generation. Our theoretical analysis of tsunami generation by these mechanisms provides an estimate of tsunami features such as a characteristic wave amplitude and wavelength. We find that in most situations, tsunami generation is dominated by the pyroclastic debris flow component of a pyroclastic flow. This work presents information sufficient to construct tsunami sources for an arbitrary pyroclastic flow interacting with most bodies of water. Copyright 2003 by the American Geophysical Union.

  9. Is extreme learning machine feasible? A theoretical assessment (part I).

    PubMed

    Liu, Xia; Lin, Shaobo; Fang, Jian; Xu, Zongben

    2015-01-01

An extreme learning machine (ELM) is a feedforward neural network (FNN)-like learning system whose connections with output neurons are adjustable, while the connections with and within hidden neurons are randomly fixed. Numerous applications have demonstrated the feasibility and high efficiency of ELM-like systems. It has, however, remained an open question whether this is true for general applications. In this two-part paper, we conduct a comprehensive feasibility analysis of ELM. In Part I, we provide an answer to the question by theoretically justifying the following: 1) for some suitable activation functions, such as polynomials, Nadaraya-Watson and sigmoid functions, the ELM-like systems can attain the theoretical generalization bound of the FNNs with all connections adjusted, i.e., they do not degrade the generalization capability of the FNNs even when the connections with and within hidden neurons are randomly fixed; 2) the number of hidden neurons needed for an ELM-like system to achieve the theoretical bound can be estimated; and 3) whenever the activation function is taken as polynomial, the deduced hidden layer output matrix is of full column-rank, therefore the generalized inverse technique can be efficiently applied to yield the solution of an ELM-like system, and, furthermore, for the nonpolynomial case, the Tikhonov regularization can be applied to guarantee the weak regularity while not sacrificing the generalization capability. In Part II, however, we reveal a different aspect of the feasibility of ELM: there also exist activation functions that make the corresponding ELM degrade the generalization capability. The obtained results underlie the feasibility and efficiency of ELM-like systems, and yield various generalizations and improvements of the systems as well.

  10. Speaking of Gender Identity: Theoretical Approaches.

    ERIC Educational Resources Information Center

    Freedman, Susan A.

    Various definitions of gender identity have ranged from recognition of one's biological sex to an individual's sense of masculinity or femininity. For the purpose of this paper, which examines some of the theoretical approaches to the subject, gender identity will be defined as "the degree to which individuals are 'aware' of and accept their…

  11. Theoretical Analysis of Canadian Lifelong Education Development

    ERIC Educational Resources Information Center

    Mukan, Natalia; Barabash, Olena; Busko, Maria

    2014-01-01

    In the article, the problem of Canadian lifelong education development has been studied. The main objectives of the article are defined as theoretical analysis of scientific and pedagogical literature which highlights different aspects of the research problem; periods of lifelong education development; and determination of lifelong learning role…

  12. Theoretical Perspectives of How Digital Natives Learn

    ERIC Educational Resources Information Center

    Kivunja, Charles

    2014-01-01

Marc Prensky, an authority on teaching and learning especially with the aid of Information and Communication Technologies, has referred to 21st century children born after 1980 as "Digital Natives". This paper reviews literature of leaders in the field to shed some light on theoretical perspectives of how Digital Natives learn and how…

  13. Schooling for Social Change: Some Theoretical Issues.

    ERIC Educational Resources Information Center

    Emoungu, Paul-Albert

    1980-01-01

    Evaluates the ability of public schools in the United States to bring about desired social change. Argues that the notion of schooling for social change is both theoretically and empirically consistent with the structural-functionalism framework. Concludes that one must decide what one means by social change before deciding whether schooling can…

  14. Poverty and Delinquency: A Theoretical Review.

    ERIC Educational Resources Information Center

    Rodman, Hyman

One of 52 theoretical papers on school crime and its relation to poverty, this chapter reviews the major cultural and structural statements on the relationship between poverty and delinquency. The value stretch perspective, stemming from research on family values and on aspirations, is introduced in order to challenge and clarify the basic works of…

  15. Domain theoretic structures in quantum information theory

    NASA Astrophysics Data System (ADS)

    Feng, Johnny

    2011-12-01

    In this thesis, we continue the study of domain theoretic structures in quantum information theory initiated by Keye Martin and Bob Coecke in 2002. The first part of the thesis is focused on exploring the domain theoretic properties of qubit channels. We discover that the Scott continuous qubit channels are exactly those that are unital or constant. We then prove that the unital qubit channels form a continuous dcpo, and identify various measurements on them. We show that Holevo capacity is a measurement on unital qubit channels, and discover the natural measurement in this setting. We find that qubit channels also form a continuous dcpo, but capacity fails to be a measurement. In the second part we focus on the study of exact dcpos, a domain theoretic structure, closely related to continuous dcpos, possessed by quantum states. Exact dcpos admit a topology, called the exact topology, and we show that the exact topology has an order theoretic characterization similar to the characterization of the Scott topology on continuous dcpos. We then explore the connection between exact and continuous dcpos; first, by identifying an important set of points, called the split points, that distinguishes between exact and continuous structures; second, by exploring a continuous completion of exact dcpos, and showing that we can recover the exact topology from the Scott topology of the completion.

  16. Theoretical Developments in the Psychology of Aging.

    ERIC Educational Resources Information Center

    Schroots, Johannes J. F.

    1996-01-01

    Presents an overview of the most distinctive psychological theories of aging promulgated after World War II. Groups theoretical developments into three periods: (1) Classical Period, which includes developmental tasks/activity theory; (2) Modern Period, which includes theories on life-span development and aging; and (3) New Period, represented by…

  17. A Theoretic Context for the Writing Lab.

    ERIC Educational Resources Information Center

    Freedman, Aviva

    A brief overview of the history of teaching writing reveals a shift from an emphasis on the composed product to the composing process and provides writing teachers who work one-to-one with students with a theoretical seven-stage model of the composing process: starting-point, exploration, incubation, illumination, composing, reformulation, and…

  18. Theoretical Foundations for Website Design Courses.

    ERIC Educational Resources Information Center

    Walker, Kristin

    2002-01-01

    Considers how theoretical foundations in website design courses can facilitate students learning the genres of Internet communication. Proposes ways that theories can be integrated into website design courses. Focuses on two students' website portfolios and ways they utilize genre theory and activity theory discussed in class to produce websites…

  19. Theoretical Eclecticism in the College Classroom

    ERIC Educational Resources Information Center

    Morrone, Anastasia S.; Tarr, Terri A.

    2005-01-01

    In this article we argue that student learning is enhanced by "theoretical eclecticism," which we define as intentionally drawing on different theories of learning when making instructional decisions to provide students with the instructional support they need to be successful. We briefly review the literature on four views of learning and on…

  20. Papers on Theoretical Issues in Health Education.

    ERIC Educational Resources Information Center

    California Univ., Berkeley. School of Public Health.

    This document is a collection of 17 papers on theoretical issues in health education presented at the Dorothy Nyswander International Symposium. The introduction, entitled "Theory and Practice in Health Education: A Synthesis," attempts to highlight some of the features of these papers and their relevance for health education practice. The papers…

  1. Why Network? Theoretical Perspectives on Networking

    ERIC Educational Resources Information Center

    Muijs, Daniel; West, Mel; Ainscow, Mel

    2010-01-01

    In recent years, networking and collaboration have become increasingly popular in education. However, there is at present a lack of attention to the theoretical basis of networking, which could illuminate when and when not to network and under what conditions networks are likely to be successful. In this paper, we will attempt to sketch the…

  2. Hybrid quantum teleportation: A theoretical model

    SciTech Connect

    Takeda, Shuntaro; Mizuta, Takahiro; Fuwa, Maria; Yoshikawa, Jun-ichi; Yonezawa, Hidehiro; Furusawa, Akira

    2014-12-04

    Hybrid quantum teleportation – continuous-variable teleportation of qubits – is a promising approach for deterministically teleporting photonic qubits. We propose how to implement it with current technology. Our theoretical model shows that faithful qubit transfer can be achieved for this teleportation by choosing an optimal gain for the teleporter’s classical channel.

  3. Theoretical models of helicopter rotor noise

    NASA Technical Reports Server (NTRS)

    Hawkings, D. L.

    1978-01-01

    For low speed rotors, it is shown that unsteady load models are only partially successful in predicting experimental levels. A theoretical model is presented which leads to the concept of unsteady thickness noise. This gives better agreement with test results. For high speed rotors, it is argued that present models are incomplete and that other mechanisms are at work. Some possibilities are briefly discussed.

  4. Theoretical Foundations of Learning Environments. Second Edition

    ERIC Educational Resources Information Center

    Jonassen, David, Ed.; Land, Susan, Ed.

    2012-01-01

    "Theoretical Foundations of Learning Environments" provides students, faculty, and instructional designers with a clear, concise introduction to the major pedagogical and psychological theories and their implications for the design of new learning environments for schools, universities, or corporations. Leading experts describe the most…

  5. Theoretical Issues of the Constitutional Regulation Mechanism

    ERIC Educational Resources Information Center

    Zhussupova, Guldaray B.; Zhailyaubayev, Rassul T.; Ukin, Symbat K.; Shunayeva, Sylu M.; Nurmagambetov, Rachit G.

    2016-01-01

    The purpose of this research is to define the concept of "constitutional regulation mechanism." The definition of the concept of "constitutional regulation mechanism" will give jurists and legislators a theoretical framework for developing legal sciences, such as the constitutional law and the theory of state and law. The…

  6. New Theoretical Approach Integrated Education and Technology

    ERIC Educational Resources Information Center

    Ding, Gang

    2010-01-01

    The paper focuses on exploring new theoretical approach in education with development of online learning technology, from e-learning to u-learning and virtual reality technology, and points out possibilities such as constructing a new teaching ecological system, ubiquitous educational awareness with ubiquitous technology, and changing the…

  7. Theoretical Convergence in Assessment of Cognition

    ERIC Educational Resources Information Center

    Bowden, Stephen C.

    2013-01-01

    In surveying the literature on assessment of cognitive abilities in adults and children, it is easy to assume that the proliferation of test batteries and terminology reflects a poverty of unifying models. However, the lack of recognition accorded good models of cognitive abilities may reflect inattention to theoretical development and injudicious…

  8. Acting Out; Theoretical and Clinical Aspects.

    ERIC Educational Resources Information Center

    Abt, Lawrence Edwin, Ed.; Weissman, Stuart L.

    The beneficial and harmful effects of acting out are studied in a series of short essays by numerous authors. Included are four articles on the theoretical and dynamic considerations of acting out, along with five clinical manifestations of acting out involving suicide and criminality in adolescents and adults. Special forms of harmful acting out…

  9. Theoretical Studies in Elementary Particle Physics

    SciTech Connect

    Collins, John C.; Roiban, Radu S

    2013-04-01

    This final report summarizes work at Penn State University from June 1, 1990 to April 30, 2012. The work was in theoretical elementary particle physics. Many new results in perturbative QCD, in string theory, and in related areas were obtained, with a substantial impact on the experimental program.

  10. Theoretical Ecology: Beginnings of a Predictive Science

    ERIC Educational Resources Information Center

    Kolata, Gina Bari

    1974-01-01

    Examines new directions in ecological research in which ecologists are analyzing systems with theoretical models and are using descriptive studies to confirm and extend their studies. The development of a model relating to species equilibriums on islands is now being applied to problems of conservation of wildlife in national parks. (JR)

  11. Formation of Massive Stars: Theoretical Considerations

    NASA Technical Reports Server (NTRS)

    Yorke, Harold W.

    2008-01-01

This slide presentation reviews theoretical considerations of the formation of massive stars. It addresses two questions: assuming a gravitationally unstable massive clump, how does enough material become concentrated into a sufficiently small volume within a sufficiently short time, and how does the forming massive star influence its immediate surroundings to limit its mass?

  12. Affine Isoperimetry and Information Theoretic Inequalities

    ERIC Educational Resources Information Center

    Lv, Songjun

    2012-01-01

    There are essential connections between the isoperimetric theory and information theoretic inequalities. In general, the Brunn-Minkowski inequality and the entropy power inequality, as well as the classical isoperimetric inequality and the classical entropy-moment inequality, turn out to be equivalent in some certain sense, respectively. Based on…

  13. An e-Learning Theoretical Framework

    ERIC Educational Resources Information Center

    Aparicio, Manuela; Bacao, Fernando; Oliveira, Tiago

    2016-01-01

E-learning systems have witnessed a usage and research increase in the past decade. This article presents the e-learning concepts ecosystem. It summarizes the various scopes on e-learning studies. Here we propose an e-learning theoretical framework. This theoretical framework is based upon three principal dimensions: users, technology, and services…

  14. THEORETICAL PREREQUISITES FOR SECOND-LANGUAGE TEACHING.

    ERIC Educational Resources Information Center

    GEFEN, RAPHAEL

    SOUND LANGUAGE TEACHING RESTS ON THREE THEORETICAL BASES. THE FIRST OF THESE, A HYPOTHESIS OF LANGUAGE ACQUISITION, MAY BE ORIENTED TOWARD THE COGNITIVE APPROACH OR THE PERCEPTIVE APPROACH OR MAY REFLECT THE POINT OF VIEW OF THE BEHAVIORIST. THE SECOND, A MODEL OF GRAMMAR, MAY BE THE PRODUCT OF THE TRADITIONALIST, THE STRUCTURALIST, THE…

  15. Reversing Affirmative Action: A Theoretical Construct.

    ERIC Educational Resources Information Center

    Tryman, Mfanya Donald

    1986-01-01

    Presents a theoretical construct for understanding job discrimination and affirmative action in higher education. Focuses on three areas of concern: (1) job listings; (2) use of the terms "qualified" and "qualifications" to purge competent Black job candidates; and (3) the role of publications in tenure decisions. Outlines strategies for…

  16. Theoretical backgrounds for interpretation of spectroscopic observations

    NASA Astrophysics Data System (ADS)

    Hadrava, P.

    2013-02-01

The advantage of analyzing observed data by fitting a theoretical model of the directly observed quantities is shown, as well as the need for a simultaneous solution of all available data. Some particular problems of the disentangling of stellar spectra and of model atmospheres of component stars of multiple systems are discussed.

  17. A Review of Theoretical and Empirical Advancements

    ERIC Educational Resources Information Center

    Wang, Mo; Henkens, Kene; van Solinge, Hanna

    2011-01-01

    In this article, we review both theoretical and empirical advancements in retirement adjustment research. After reviewing and integrating current theories about retirement adjustment, we propose a resource-based dynamic perspective to apply to the understanding of retirement adjustment. We then review empirical findings that are associated with…

  18. Theoretical Perspectives on Mathematics Teacher Change

    ERIC Educational Resources Information Center

    Goos, Merrilyn; Geiger, Vince

    2010-01-01

    In this review essay we critically examine issues raised by authors whose work is published in the two Special Issues of "JMTE" (Part 1, 13.5 and Part 2, 13.6) on Mathematics Teacher and Mathematics Teacher Educator Change--Insight through Theoretical Perspectives. While the authors have drawn on a wide range of theories and approaches, we have…

  19. Higgs boson couplings: Measurements and theoretical interpretation

    NASA Astrophysics Data System (ADS)

    Mariotti, Chiara; Passarino, Giampiero

    2017-02-01

    This report will review the Higgs boson properties: the mass, the total width and the couplings to fermions and bosons. The measurements have been performed with the data collected in 2011 and 2012 at the LHC accelerator at CERN by the ATLAS and CMS experiments. Theoretical frameworks to search for new physics are also introduced and discussed.

  20. Child Language Acquisition: Contrasting Theoretical Approaches

    ERIC Educational Resources Information Center

    Ambridge, Ben; Lieven, Elena V. M.

    2011-01-01

    Is children's language acquisition based on innate linguistic structures or built from cognitive and communicative skills? This book summarises the major theoretical debates in all of the core domains of child language acquisition research (phonology, word-learning, inflectional morphology, syntax and binding) and includes a complete introduction…

  1. Assessing Two Theoretical Frameworks of Civic Engagement

    ERIC Educational Resources Information Center

    García-Cabrero, Benilde; Pérez-Martínez, María Guadalupe; Sandoval-Hernández, Andrés; Caso-Niebla, Joaquín; Díaz-López, Carlos David

    2016-01-01

    The purpose of this study was to empirically test two major theoretical models: a modified version of the social capital model (Pattie, Seyd and Whiteley, 2003), and the Informed Social Engagement Model (Barr and Selman, 2014; Selman and Kwok, 2010), to explain civic participation and civic knowledge of adolescents from Chile, Colombia and Mexico,…

  2. Theoretical Studies of Atom Surface Interactions

    DTIC Science & Technology

    1981-02-01

  3. Theoretically required urinary flow during high-dose methotrexate infusion.

    PubMed

    Sasaki, K; Tanaka, J; Fujimoto, T

    1984-01-01

The renal excretion of methotrexate (MTX) and its major metabolite 7-hydroxymethotrexate (7-OH-MTX) was analysed in 12 children with malignancies during 52 courses of high-dose methotrexate (H-D-MTX) infusion at dosages ranging from 0.7 to 8.4 g/m2. The peak concentrations of both MTX and 7-OH-MTX exceeded the aqueous solubilities of these compounds at low pH (less than or equal to 6.0). The cumulative MTX excretion in urine was 75%-98% of the administered amount of MTX, and the cumulative 7-OH-MTX excretion in the urine was 3%-15%. The theoretically required urinary flow (TRUF) was estimated as the minimum urine volume needed for complete dissolution of MTX and its metabolites in urine. TRUF during MTX infusion from 0 to 6 h and from 6 to 12 h was correlated with the dosage of MTX, and these values were 0.1-1.8 ml/min/m2 at pH 7.0, 0.5-11.1 ml/min/m2 at pH 6.0, and 1.9-42.2 ml/min/m2 at pH 5.0 with dosages of 0.7 to 8.4 g/m2. The value of the theoretically required urinary flow is important to ensure adequate hydration and the optimum alkalinization schedule for massive MTX infusion.
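The TRUF idea reduces to a rate-over-solubility quotient: the urinary flow must be at least the drug excretion rate divided by the drug's aqueous solubility at the urine pH, which is why the reported values rise sharply as pH falls. A minimal sketch; the excreted amount and the solubility figures below are hypothetical placeholders, not values from the study:

```python
def truf_ml_per_min_m2(excreted_mg_per_m2, interval_min, solubility_mg_per_ml):
    """Minimum urinary flow keeping drug concentration at or below its solubility.

    excreted_mg_per_m2   : drug amount excreted over the collection interval
    interval_min         : duration of the interval in minutes
    solubility_mg_per_ml : aqueous solubility at the urine pH of interest (assumed)
    """
    excretion_rate = excreted_mg_per_m2 / interval_min   # mg/min/m2
    return excretion_rate / solubility_mg_per_ml         # ml/min/m2

# Illustrative (hypothetical) numbers: 500 mg/m2 excreted over the first 6 h,
# with assumed solubilities at three urine pH values.
for ph, sol in [(7.0, 9.0), (6.0, 1.5), (5.0, 0.4)]:     # mg/ml, assumed
    print(f"pH {ph}: TRUF = {truf_ml_per_min_m2(500, 360, sol):.2f} ml/min/m2")
```

Because solubility drops as the urine becomes more acidic, the same excretion rate demands a much larger urine flow at pH 5 than at pH 7, matching the pattern of the reported ranges.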

  4. Viscoelastic properties of the contracting detrusor. I. Theoretical basis.

    PubMed

    Venegas, J G

    1991-08-01

This paper presents the theoretical basis for estimating the detrusor's viscoelastic properties using the small-amplitude oscillatory perturbations technique. Three possible configurations of the simplest second-order lumped-parameter model of the bladder were analyzed to derive equations of the parameters incremental resistance (R) and incremental elastance (K) in terms of the experimentally measurable magnitude and phase of hydrodynamic stiffness. In model I, single viscous, elastic, and inertial elements were assumed to be connected in series. In model III the elastic and viscous elements were connected in series, but the inertial element was connected in parallel. With the assumption of a spherical geometry of the bladder, equations were also derived to obtain the bladder wall mechanical properties, spring incremental constant (S), and muscle incremental viscosity (b) as functions of bladder volume and the hydrodynamic properties R and K. Integration of the incremental equation describing the viscous component yields an expression that fits well the force-velocity experimental data from bladder strips reported by others. This finding suggests that muscle viscosity measured with the small-amplitude oscillations and analyzed with the proper theoretical model may be related to the force-velocity characteristics of the muscle. The equations derived here form the basis for analyzing the experimental data described in the companion paper.

  5. Information theoretic model selection applied to supernovae data

    NASA Astrophysics Data System (ADS)

    Biesiada, Marek

    2007-02-01

    Current advances in observational cosmology suggest that our Universe is flat and dominated by dark energy. There are several different theoretical ideas invoked to explain the dark energy with relatively little guidance of which one of them might be right. Therefore the emphasis of ongoing and forthcoming research in this field shifts from estimating specific parameters of the cosmological model to the model selection. In this paper we apply an information theoretic model selection approach based on the Akaike criterion as an estimator of Kullback Leibler entropy. Although this approach has already been used by some authors in a similar context, this paper provides a more systematic introduction to the Akaike criterion. In particular, we present the proper way of ranking the competing models on the basis of Akaike weights (in Bayesian language: posterior probabilities of the models). This important ingredient is lacking from alternative studies dealing with cosmological applications of the Akaike criterion. Of the many particular models of dark energy we focus on four: quintessence, quintessence with a time varying equation of state, the braneworld scenario and the generalized Chaplygin gas model, and test them on Riess's gold sample. As a result we obtain that the best model—in terms of the Akaike criterion—is the quintessence model. The odds suggest that although there exist differences in the support given to specific scenarios by supernova data, most of the models considered receive similar support. The only exception is the Chaplygin gas which is considerably less supported. One can also note that models similar in structure, e.g. ΛCDM, quintessence and quintessence with a variable equation of state, are closer to each other in terms of Kullback Leibler entropy. Models having different structure, e.g. Chaplygin gas and the braneworld scenario, are more distant (in the Kullback Leibler sense) from the best one.
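The ranking by Akaike weights described in this abstract follows the standard recipe: compute differences Delta_i = AIC_i - min_j AIC_j and weights w_i = exp(-Delta_i / 2) normalized to sum to one, interpretable as posterior-like model probabilities. A minimal sketch; the model names echo the paper, but the AIC numbers are made up for illustration:

```python
import math

def akaike_weights(aics):
    """Akaike weights: relative support for each model given its AIC value."""
    deltas = [a - min(aics) for a in aics]         # AIC differences from the best model
    rel = [math.exp(-0.5 * d) for d in deltas]     # relative likelihoods of the models
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC values for four dark-energy models (not the paper's results)
aic = {"quintessence": 178.1, "var-w quintessence": 180.0,
       "braneworld": 181.3, "Chaplygin gas": 186.9}
w = akaike_weights(list(aic.values()))
for name, wi in zip(aic, w):
    print(f"{name:>20s}: w = {wi:.3f}")
```

With these illustrative numbers the first three models receive comparable weight while the Chaplygin gas is strongly disfavored, mirroring the qualitative conclusion the abstract reports.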

  6. Stability of suspensions: theoretical and practical considerations before compounding.

    PubMed

    Hadžiabdić, Jasmina; Elezović, Alisa; Rahić, Ognjenka; Mujezin, Indira; Vranić, Edina

    2015-01-01

Suspension stability can be theoretically estimated prior to the beginning of the formulating process based on the solid phase particle size, liquid phase density, and viscosity. Stokes equation can be used to predict suspension stability in order to save time and resources. The examples of these calculations for the assessment of suspension physical characteristics are given in this article. One parameter that cannot be theoretically estimated with precision is flocculation/deflocculation. Flocculation can be experimentally determined using the "jar test," and it is a critical parameter for the substances showing inclination toward caking. Suspensions will sediment in time; however, their key feature is the ability to redisperse in order to preserve the efficacy and proper dosage. Bismuth subnitrate is practically insoluble in water, which makes it convenient for oral pharmaceutical suspensions, rather than the other pharmaceutical forms. Like the other bismuth compounds, it tends to cake in aqueous medium. In order to prevent formation of the solid sediment, controlled flocculation of the suspended bismuth subnitrate particles is recommended. The effect of the excipients (sodium citrate, Tween 20, propylene glycol, microcrystalline cellulose) on the transmittance of the prepared suspensions and the quantity and characteristics of the formed sediment were evaluated. Based on their transmittance characteristics, suspensions containing sodium citrate, as well as the formulations with sodium citrate and microcrystalline cellulose, were determined to be flocculating suspensions, regardless of the sodium citrate concentration used. Microcrystalline cellulose with 15% (w/w) sodium citrate showed the highest affinity towards formation of flocculating suspensions, with the highest transmittance value.
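The Stokes-equation estimate referred to above gives the terminal settling velocity of a small sphere, v = 2r²(ρp − ρf)g / (9η): doubling the radius quadruples the sedimentation rate, while higher liquid viscosity slows it. A sketch with illustrative values; the particle size is arbitrary and the density is only a rough figure for bismuth subnitrate:

```python
def stokes_velocity(radius_m, rho_particle, rho_fluid, viscosity_pa_s, g=9.81):
    """Terminal sedimentation velocity (m/s) of a sphere by Stokes' law.

    radius_m      : particle radius in metres
    rho_*         : particle and fluid densities in kg/m3
    viscosity_pa_s: dynamic viscosity of the liquid phase in Pa*s
    """
    return 2.0 * radius_m**2 * (rho_particle - rho_fluid) * g / (9.0 * viscosity_pa_s)

# Illustrative: a 5 um particle (density ~4900 kg/m3, roughly bismuth subnitrate)
# settling in water (1000 kg/m3, 1 mPa*s); on the order of 1e-4 m/s here.
v = stokes_velocity(5e-6, 4900.0, 1000.0, 1.0e-3)
print(f"settling velocity: {v:.3e} m/s")
```

This kind of back-of-the-envelope number is what lets a formulator judge, before compounding, whether a candidate suspension will sediment within minutes or days.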

  7. Estimation in satellite control.

    NASA Technical Reports Server (NTRS)

    Debra, D. B.

    1971-01-01

The use of estimators or observers is discussed as applied to satellite attitude control and the control of drag-free satellites. The practical problems of implementation are discussed, and the relative advantages of full and reduced state estimators are compared, particularly in terms of their effectiveness and bandwidth as filters. Three applications are used to illustrate the principles. They are: (1) a reaction wheel control system, (2) a spinning attitude control system, and (3) a drag-free satellite translational control system. Fixed estimator gains are shown to be adequate for these (and many other) applications. Our experience in the hardware realization of estimators has led us to categorize the error sources in terms of those that improve with increased estimator gains and those that get worse with increased estimator gains.

  8. Multitaper Spectrum Estimates

    NASA Astrophysics Data System (ADS)

    Fodor, I. K.; Stark, P. B.

    Multitapering is a statistical technique developed to improve on the notorious periodogram estimate of the power spectrum (Thomson, 1982; Percival, Walden 1993). We show how to obtain orthogonal tapers for time series observed with gaps, and how to use statistical resampling techniques (Efron, Tibshirani 1993) to calculate realistic uncertainty estimates for multitaper estimates. We introduce multisegment multitapering. Multitapering can also be extended to the 2D case. We indicate how to construct tapers that minimize the spatial leakage in estimates of the spherical harmonic decomposition of the velocity images. Spatial multitapering followed by the temporal tapering of the estimated spherical harmonic time series is expected to result in improved spectrum and subsequent solar oscillation mode parameter estimates.
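The core of multitapering, averaging periodograms taken through a family of orthogonal tapers to reduce leakage and estimator variance, can be sketched compactly. Thomson's method uses DPSS (Slepian) tapers; the sketch below substitutes sine tapers, which are also mutually orthogonal, as a simple stand-in, and the test signal is illustrative:

```python
import numpy as np

def sine_taper_psd(x, n_tapers=4):
    """Multitaper PSD estimate: average periodograms over orthogonal sine tapers."""
    n = len(x)
    t = np.arange(1, n + 1)
    psds = []
    for k in range(1, n_tapers + 1):
        # k-th sine taper; the family is orthogonal over t = 1..n
        taper = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * k * t / (n + 1))
        psds.append(np.abs(np.fft.rfft(x * taper)) ** 2)
    return np.mean(psds, axis=0)   # averaging across tapers reduces variance

# Illustrative: recover a tone at normalized frequency 0.1
n = 512
x = np.sin(2 * np.pi * 0.1 * np.arange(n))
psd = sine_taper_psd(x)
peak_bin = int(np.argmax(psd))     # expect near 0.1 * n
```

Each taper yields an approximately independent spectrum estimate of the same data, which is also what makes resampling-based uncertainty estimates of the kind the abstract describes natural in this setting.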

  9. Estimating Airline Operating Costs

    NASA Technical Reports Server (NTRS)

    Maddalon, D. V.

    1978-01-01

    The factors affecting commercial aircraft operating and delay costs were used to develop an airline operating cost model which includes a method for estimating the labor and material costs of individual airframe maintenance systems. The model permits estimates of aircraft related costs, i.e., aircraft service, landing fees, flight attendants, and control fees. A method for estimating the costs of certain types of airline delay is also described.

  10. Estimating Prices of Products

    NASA Technical Reports Server (NTRS)

    Aster, R. W.; Chamberlain, R. G.; Zendejas, S. C.; Lee, T. S.; Malhotra, S.

    1986-01-01

    Company-wide or process-wide production simulated. Price Estimation Guidelines (IPEG) program provides simple, accurate estimates of prices of manufactured products. Simplification of SAMIS allows analyst with limited time and computing resources to perform greater number of sensitivity studies. Although developed for photovoltaic industry, readily adaptable to standard assembly-line type of manufacturing industry. IPEG program estimates annual production price per unit. IPEG/PC program written in TURBO PASCAL.

  11. Reservoir Temperature Estimator

    SciTech Connect

    Palmer, Carl D.

    2014-12-08

    The Reservoir Temperature Estimator (RTEst) is a program that can be used to estimate deep geothermal reservoir temperature and chemical parameters such as CO2 fugacity based on the water chemistry of shallower, cooler reservoir fluids. This code uses the plugin features provided in The Geochemist’s Workbench (Bethke and Yeakel, 2011) and interfaces with the model-independent parameter estimation code Pest (Doherty, 2005) to provide for optimization of the estimated parameters based on the minimization of the weighted sum of squares of a set of saturation indexes from a user-provided mineral assemblage.

  12. Parameter estimating state reconstruction

    NASA Technical Reports Server (NTRS)

    George, E. B.

    1976-01-01

    Parameter estimation is considered for systems whose entire state cannot be measured. Linear observers are designed to recover the unmeasured states to a sufficient accuracy to permit the estimation process. There are three distinct dynamics that must be accommodated in the system design: the dynamics of the plant, the dynamics of the observer, and the system updating of the parameter estimation. The latter two are designed to minimize interaction of the involved systems. These techniques are extended to weakly nonlinear systems. The application to a simulation of a space shuttle POGO system test is of particular interest. A nonlinear simulation of the system is developed, observers designed, and the parameters estimated.

  13. Covariance estimators for generalized estimating equations (GEE) in longitudinal analysis with small samples.

    PubMed

    Wang, Ming; Kong, Lan; Li, Zheng; Zhang, Lijun

    2016-05-10

    Generalized estimating equations (GEE) is a general statistical method to fit marginal models for longitudinal data in biomedical studies. The variance-covariance matrix of the regression parameter coefficients is usually estimated by a robust "sandwich" variance estimator, which does not perform satisfactorily when the sample size is small. To reduce the downward bias and improve the efficiency, several modified variance estimators have been proposed for bias-correction or efficiency improvement. In this paper, we provide a comprehensive review on recent developments of modified variance estimators and compare their small-sample performance theoretically and numerically through simulation and real data examples. In particular, Wald tests and t-tests based on different variance estimators are used for hypothesis testing, and the guideline on appropriate sample sizes for each estimator is provided for preserving type I error in general cases based on numerical results. Moreover, we develop a user-friendly R package "geesmv" incorporating all of these variance estimators for public usage in practice.
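    The "sandwich" construction at the heart of this literature is compact enough to sketch. The code below is not from the paper or the geesmv package; it is a minimal NumPy illustration of the uncorrected (Liang-Zeger) cluster-robust covariance in the working-independence, identity-link special case, where GEE reduces to least squares with a clustered "meat" term:

```python
import numpy as np

def cluster_sandwich(X, y, cluster):
    """Uncorrected cluster-robust 'sandwich' covariance for regression
    coefficients: bread = (X'X)^-1, meat = sum over clusters i of
    X_i' r_i r_i' X_i, where r_i are the within-cluster residuals."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    p = X.shape[1]
    meat = np.zeros((p, p))
    for c in np.unique(cluster):
        idx = cluster == c
        score = X[idx].T @ resid[idx]     # per-cluster score contribution
        meat += np.outer(score, score)
    return beta, bread @ meat @ bread

# Hypothetical longitudinal data: 30 subjects, 4 repeated measures each
rng = np.random.default_rng(0)
cluster = np.repeat(np.arange(30), 4)
X = np.column_stack([np.ones(120), rng.standard_normal(120)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(120)
beta, cov = cluster_sandwich(X, y, cluster)
print(beta, np.sqrt(np.diag(cov)))        # coefficients and robust SEs
```

    The small-sample corrections reviewed in the paper (e.g., Mancl-DeRouen, Kauermann-Carroll) modify this basic estimator by rescaling the residuals or the meat term; with few clusters the uncorrected version above is known to be biased downward.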

  14. Nest survival estimation: a review of alternatives to the Mayfield estimator

    USGS Publications Warehouse

    Jehle, G.; Yackel Adams, A.A.; Savidge, J.A.; Skagen, S.K.

    2004-01-01

    Reliable estimates of nest survival are essential for assessing strategies for avian conservation. We review the history of modifications and alternatives for estimating nest survival, with a focus on four techniques: apparent nest success, the Mayfield estimator, the Stanley method, and program MARK. The widely used Mayfield method avoids the known positive bias inherent in apparent nest success by estimating daily survival rates using the number of exposure days, eliminating the need to monitor nests from initiation. Concerns that some of Mayfield's assumptions were restrictive stimulated the development of new techniques. Stanley's method allows for calculation of stage-specific daily survival rates when transition and failure dates are unknown, and eliminates Mayfield's assumption that failure occurred midway through the nest-check interval. Program MARK obviates Mayfield's assumption of constant daily survival within nesting stages and evaluates variation in nest survival as a function of biologically relevant factors. These innovative methods facilitate the evaluation of nest survival using an information-theoretic approach. We illustrate use of these methods with Lark Bunting (Calamospiza melanocorys) nest data from the Pawnee National Grassland, Colorado. Nest survival estimates calculated using the Mayfield, Stanley, and MARK methods were similar, but apparent nest success estimates were 1-24% greater than the other estimates. MARK analysis revealed that survival of Lark Bunting nests differed between site-year groups and declined with both nest age and time in season, but did not vary with weather parameters. We encourage researchers to use these approaches to gain reliable and meaningful nest survival estimates.
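    The Mayfield calculation itself is simple: the daily survival rate is one minus failures per exposure-day, and period survival is that rate raised to the length of the nesting period. A sketch with hypothetical numbers (not from the Lark Bunting data):

```python
def mayfield_daily_survival(failures, exposure_days):
    """Mayfield daily survival rate (DSR): 1 - failures / exposure-days."""
    return 1.0 - failures / exposure_days

def period_survival(dsr, period_days):
    """Nest success over a full nesting period of period_days days."""
    return dsr ** period_days

# Hypothetical: 10 failed nests over 500 exposure-days, 25-day nesting period
dsr = mayfield_daily_survival(10, 500.0)    # 0.98
print(round(period_survival(dsr, 25), 3))   # 0.603
```

    The positive bias of apparent nest success arises because nests that fail before discovery never enter the sample; counting exposure-days instead of nests avoids this.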

  15. Theoretical and observational determinations of the ionization coefficient of meteors

    NASA Astrophysics Data System (ADS)

    Jones, William

    1997-07-01

    km s^-1), for which no significant secondary ionization or recombination will take place. The theoretical results may be approximated by the analytic form β ≈ 9.4 × 10^-6 (v - 10)^2 v^0.8, where the velocity v is in km s^-1. For visual meteors in the range of about 30 to 60 km s^-1, we propose as a reasonable approximation the result we have obtained from the Verniani-Hawkins observational data using simulation results for the luminosity: β = 4.91 × 10^-6 v^2.25. At present, however, we are unable to propose estimates of β for slow bright meteors or fast radio meteors.
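    The two approximations quoted in the abstract can be evaluated directly; the sketch below simply tabulates them at representative visual-meteor speeds (v in km/s):

```python
def beta_theoretical(v):
    """Theoretical approximation: beta ~= 9.4e-6 (v - 10)^2 v^0.8, v in km/s."""
    return 9.4e-6 * (v - 10.0) ** 2 * v ** 0.8

def beta_observational(v):
    """Fit to Verniani-Hawkins data (about 30-60 km/s): beta = 4.91e-6 v^2.25."""
    return 4.91e-6 * v ** 2.25

# Compare the two forms over the visual-meteor speed range
for v in (30.0, 40.0, 60.0):
    print(v, beta_theoretical(v), beta_observational(v))
```

    Both forms rise steeply with speed, which is why the ionization coefficient is so uncertain outside the well-observed velocity range.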

  16. Estimating Latent Distributions.

    ERIC Educational Resources Information Center

    Mislevy, Robert J.

    1984-01-01

    Assuming vectors of item responses depend on ability through a fully specified item response model, this paper presents maximum likelihood equations for estimating the population parameters without estimating an ability parameter for each subject. Asymptotic standard errors, tests of fit, computing approximations, and details of four special cases…

  17. Estimating mutual information

    NASA Astrophysics Data System (ADS)

    Kraskov, Alexander; Stögbauer, Harald; Grassberger, Peter

    2004-06-01

    We present two classes of improved estimators for mutual information M(X,Y), from samples of random points distributed according to some joint probability density μ(x,y). In contrast to conventional estimators based on binnings, they are based on entropy estimates from k-nearest neighbor distances. This means that they are data efficient (with k=1 we resolve structures down to the smallest possible scales), adaptive (the resolution is higher where data are more numerous), and have minimal bias. Indeed, the bias of the underlying entropy estimates is mainly due to nonuniformity of the density at the smallest resolved scale, giving typically systematic errors which scale as functions of k/N for N points. Numerically, we find that both families become exact for independent distributions, i.e., the estimator M̂(X,Y) vanishes (up to statistical fluctuations) if μ(x,y)=μ(x)μ(y). This holds for all tested marginal distributions and for all dimensions of x and y. In addition, we give estimators for redundancies between more than two random variables. We compare our algorithms in detail with existing algorithms. Finally, we demonstrate the usefulness of our estimators for assessing the actual independence of components obtained from independent component analysis (ICA), for improving ICA, and for estimating the reliability of blind source separation.
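    A minimal sketch of the first of these estimators (the "algorithm 1" form, I ≈ ψ(k) + ψ(N) - ⟨ψ(n_x + 1) + ψ(n_y + 1)⟩, with neighbor counts taken in the max-norm) can be written with SciPy k-d trees; the function and variable names are ours:

```python
import numpy as np
from scipy.special import digamma
from scipy.spatial import cKDTree

def ksg_mutual_information(x, y, k=3):
    """k-nearest-neighbor mutual information estimate (KSG 'algorithm 1')."""
    x = np.asarray(x).reshape(len(x), -1)
    y = np.asarray(y).reshape(len(y), -1)
    n = len(x)
    xy = np.hstack([x, y])
    # Max-norm distance to the k-th neighbor in the joint space
    # (k+1 because the query point itself is returned at distance 0)
    d, _ = cKDTree(xy).query(xy, k=k + 1, p=np.inf)
    eps = d[:, -1]
    # Count marginal neighbors strictly within eps (exclude the point itself)
    nx = cKDTree(x).query_ball_point(x, eps - 1e-12, p=np.inf, return_length=True) - 1
    ny = cKDTree(y).query_ball_point(y, eps - 1e-12, p=np.inf, return_length=True) - 1
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))

# Correlated Gaussians with rho = 0.9: true MI = -0.5 ln(1 - 0.81) ≈ 0.83
rng = np.random.default_rng(3)
x = rng.standard_normal(1000)
y = 0.9 * x + np.sqrt(1 - 0.81) * rng.standard_normal(1000)
mi = ksg_mutual_information(x, y)
print(mi)  # near 0.83
```

    For independent samples the same call fluctuates around zero, in line with the abstract's observation that the estimator vanishes for factorized densities.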

  18. Fano factor estimation.

    PubMed

    Rajdl, Kamil; Lansky, Petr

    2014-02-01

    The Fano factor is one of the most widely used measures of variability of spike trains. Its standard estimator is the ratio of the sample variance to the sample mean of spike counts observed in a time window, and the quality of the estimator strongly depends on the length of the window. We investigate this dependence under the assumption that the spike train behaves as an equilibrium renewal process. We show which characteristics of the spike train have a large effect on the estimator bias; in particular, the effect of the refractory period is analytically evaluated. Next, we derive an approximate asymptotic formula for the mean square error of the estimator, which can also be used to find the minimum of the error in estimation from single spike trains. The accuracy of the Fano factor estimator is compared with the accuracy of the estimator based on the squared coefficient of variation. All the results are illustrated for spike trains with gamma and inverse Gaussian probability distributions of interspike intervals. Finally, we discuss how to select a suitable observation window for Fano factor estimation.
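    The standard estimator described above is one line of arithmetic: bin the spike times into windows and divide the sample variance of the counts by their mean. A sketch (a Poisson train is used only as a sanity check, since its Fano factor is 1 in any window):

```python
import numpy as np

def fano_factor(spike_times, window, t_max):
    """Standard Fano factor estimator: sample variance over sample mean
    of spike counts in consecutive windows of the given length."""
    edges = np.arange(0.0, t_max, window)   # full windows only
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts.var(ddof=1) / counts.mean()

# Poisson spike train at ~10 Hz via exponential interspike intervals
rng = np.random.default_rng(1)
isi = rng.exponential(scale=0.1, size=20000)
spikes = np.cumsum(isi)
f = fano_factor(spikes, window=1.0, t_max=spikes[-1])
print(f)  # close to 1 for a Poisson process
```

    The paper's point is that for non-Poisson (renewal) trains the bias and variance of exactly this ratio depend strongly on the window length, e.g., through the refractory period.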

  19. Time Delay Estimation

    DTIC Science & Technology

    2006-01-01

    investigate the possibility of exploiting the properties of a detected Low Probability of Intercept (LPI) signal waveform to estimate time delay, and by...ratios, namely 10 dB and less. We also examine the minimum time-delay estimation error, the Cramer-Rao bound. The results indicate that the method

  20. Analysis of DOA estimation spatial resolution using MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Yue; Wang, Hongyuan; Luo, Bin

    2005-11-01

    This paper presents a performance analysis of the spatial resolution of the direction of arrival (DOA) estimates attained by the multiple signal classification (MUSIC) algorithm for uncorrelated sources. The confidence interval of the estimated angle, which is more intuitive, is adopted as a new evaluation criterion for spatial resolution. Then, based on statistical methods, a qualitative analysis reveals the factors influencing the performance of the MUSIC algorithm. Finally, quantitative simulations confirm the theoretical analysis.
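    The MUSIC algorithm itself is brief: split the sample covariance into signal and noise subspaces and scan a steering vector against the noise subspace. A minimal uniform-linear-array sketch (parameters and names are illustrative, not from the paper):

```python
import numpy as np

def music_spectrum(R, n_sources, n_angles=1800, d=0.5):
    """MUSIC pseudospectrum for a uniform linear array with element
    spacing d (in wavelengths), given the sample covariance matrix R."""
    m = R.shape[0]
    w, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = V[:, : m - n_sources]               # noise subspace
    angles = np.linspace(-90.0, 90.0, n_angles)
    k = np.arange(m)
    P = np.empty(n_angles)
    for i, th in enumerate(angles):
        a = np.exp(2j * np.pi * d * k * np.sin(np.radians(th)))  # steering vector
        P[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return angles, P

# Two uncorrelated sources at -10 and 20 degrees, 8-element array, 400 snapshots
rng = np.random.default_rng(2)
m, snapshots = 8, 400
doas = np.radians([-10.0, 20.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(doas)))
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = rng.standard_normal((m, snapshots)) + 1j * rng.standard_normal((m, snapshots))
X = A @ S + 0.1 * noise
R = X @ X.conj().T / snapshots
angles, P = music_spectrum(R, n_sources=2)
# Pick the two strongest local maxima of the pseudospectrum
interior = np.arange(1, len(P) - 1)
locmax = interior[(P[interior] > P[interior - 1]) & (P[interior] > P[interior + 1])]
top2 = locmax[np.argsort(P[locmax])[-2:]]
print(np.sort(angles[top2]))  # peaks near -10 and 20 degrees
```

    The spatial resolution the paper analyzes is, in this picture, how close two such peaks can be before they merge at a given SNR and snapshot count.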

  1. Robust incremental condition estimation

    SciTech Connect

    Bischof, C.H.; Tang, P.T.P.

    1991-03-29

    This paper presents an improved version of incremental condition estimation, a technique for tracking the extremal singular values of a triangular matrix as it is being constructed one column at a time. We present a new motivation for this estimation technique using orthogonal projections. The paper focuses on an implementation of this estimation scheme in an accurate and consistent fashion. In particular, we address the subtle numerical issues arising in the computation of the eigensystem of a symmetric rank-one perturbed diagonal 2 × 2 matrix. Experimental results show that the resulting scheme does a good job in estimating the extremal singular values of triangular matrices, independent of matrix size and matrix condition number, and that it performs qualitatively in the same fashion as some of the commonly used nonincremental condition estimation schemes.

  2. Theoretical Modeling for Hepatic Microwave Ablation

    PubMed Central

    Prakash, Punit

    2010-01-01

    Thermal tissue ablation is an interventional procedure increasingly being used for treatment of diverse medical conditions. Microwave ablation is emerging as an attractive modality for thermal therapy of large soft tissue targets in short periods of time, making it particularly suitable for ablation of hepatic and other tumors. Theoretical models of the ablation process are a powerful tool for predicting the temperature profile in tissue and resultant tissue damage created by ablation devices. These models play an important role in the design and optimization of devices for microwave tissue ablation. Furthermore, they are a useful tool for exploring and planning treatment delivery strategies. This review describes the status of theoretical models developed for microwave tissue ablation. It also reviews current challenges, research trends and progress towards development of accurate models for high temperature microwave tissue ablation. PMID:20309393

  3. Evolution of Theoretical Perspectives in My Research

    NASA Astrophysics Data System (ADS)

    Otero, Valerie K.

    2009-11-01

    Over the past 10 years I have been using socio-cultural theoretical perspectives to understand how people learn physics in a highly interactive, inquiry-based physics course such as Physics and Everyday Thinking [1]. As a result of using various perspectives (e.g. Distributed Cognition and Vygotsky's Theory of Concept Formation), my understanding of how these perspectives can be useful for investigating students' learning processes has changed. In this paper, I illustrate changes in my thinking about the role of socio-cultural perspectives in understanding physics learning and describe elements of my thinking that have remained fairly stable. Finally, I will discuss pitfalls in the use of certain perspectives and discuss areas that need attention in theoretical development for PER.

  4. Center of Excellence in Theoretical Geoplasma Research

    NASA Astrophysics Data System (ADS)

    Chang, Tom

    1993-08-01

    The Center for Theoretical Geoplasma Physics was established at MIT in 1986 through an AFOSR University Research Initiative grant. The goal of the Center since its inception has been to develop and maintain a program of excellence in interdisciplinary geoplasma research involving the mutual interaction of ionospheric scientists, aeronomists, plasma physicists, and numerical analysts. During the past six years, members of the center have made germinal contributions to a number of definitive research findings in the fundamental understanding of ionospheric turbulence, particle acceleration, and the phenomenon of coupling between the ionosphere and magnetosphere. Some of the results of these research activities have already found practical applications toward the mission of the Air Force by scientists at the Geophysics Directorate of the Phillips Laboratory, particularly those affiliated with the research group headed by Dr. J.R. Jasperse of the Ionospheric Effects Branch.

  5. Tracking controlled chaos: Theoretical foundations and applications.

    PubMed

    Schwartz, Ira B.; Carr, Thomas W.; Triandaf, Ioana

    1997-12-01

    Tracking controlled states over a large range of accessible parameters is a process which allows for the experimental continuation of unstable states in both chaotic and non-chaotic parameter regions of interest. In algorithmic form, tracking allows experimentalists to examine many of the unstable states responsible for much of the observed nonlinear dynamic phenomena. Here we present a theoretical foundation for tracking controlled states from both dynamical systems as well as control theoretic viewpoints. The theory is constructive and shows explicitly how to track a curve of unstable states as a parameter is changed. Applications of the theory to various forms of control currently used in dynamical system experiments are discussed. Examples from both numerical and physical experiments are given to illustrate the wide range of tracking applications. (c) 1997 American Institute of Physics.

  6. Masochism: a clinical and theoretical overview.

    PubMed

    Sack, R L; Miller, W

    1975-08-01

    This paper will review some of the theoretical and clinical features of masochism from an eclectic point of view. The topic of masochism has been taken up by authors of many perspectives because it addresses one of the anomalous, absurd, difficult-to-explain aspects of behavior for which no psychological system has an easy answer. Therefore, a wide-ranging literature on the topic of masochism is available. However, few previous reviewers have attempted to draw from a variety of disciplines and theoretical frameworks. In this review the historical development of the term and some of the psychoanalytic conceptualizations will be presented first. Since previous reviews of masochism from a strictly psychoanalytic perspective are adequate (Brenner, 1959; Eisenbud, 1967; Fenichel, 1945; Loewenstein, 1957; Panken, 1967), our discussions of masochism will be developed employing more extensively the interpersonal, social, learning theory, and biological perspectives.

  7. Theoretical simulation of scanning probe microscopy.

    PubMed

    Tsukada, Masaru

    2011-01-01

    Methods for the theoretical simulation of scanning probe microscopy, including scanning tunneling microscopy (STM), atomic force microscopy (AFM), and Kelvin probe force microscopy (KPFM), are reviewed with recent topics as case studies. For the case of STM simulation, the importance of the tip electronic states is emphasized, and an advanced formalism is presented based on non-equilibrium Green's function theory going beyond Bardeen's perturbation theory. For the simulation of AFM, we show examples of a 3D force map for AFM in water, and theoretical analyses for a nano-mechanical experiment on a protein molecule. An attempt to simulate KPFM images based on the electrostatic multipole interaction between a tip and a sample is also introduced.

  8. Theoretical considerations for oocyte cryopreservation by freezing.

    PubMed

    Fahy, Gregory M

    2007-06-01

    Attempts to cryopreserve oocytes by freezing have, to date, been based mostly on empirical approaches rather than on basic principles, and perhaps in part for this reason have not been very successful. Theoretical considerations suggest some fairly 'heretical' conclusions. The concentrations of permeating cryoprotectants employed in past studies have probably been inadequate, and the choice of propylene glycol (PG) as a protective agent is questionable. The use of non-penetrating agents, such as sucrose to preshrink oocytes prior to freezing and which, therefore, exacerbate osmotic stress during freezing, may be inappropriate, yet may protect in part by reducing the concentration of PG during freezing. The methods used to add and remove cryoprotectant may be suboptimal, and may be based on an inadequate understanding of the cryobiological constraints for oocyte survival. Given these concerns, it is not surprising that fully satisfactory results have been elusive, but there is every reason to believe that greater success is possible using a more theoretically appropriate approach.

  9. Theoretical description of metabolism using queueing theory.

    PubMed

    Evstigneev, Vladyslav P; Holyavka, Marina G; Khrapatiy, Sergii V; Evstigneev, Maxim P

    2014-09-01

    A theoretical description of the process of metabolism has been developed on the basis of the Pachinko model (see Nicholson and Wilson in Nat Rev Drug Discov 2:668-676, 2003) and queueing theory. The suggested approach relies on the probabilistic nature of metabolic events and the Poisson distribution of the incoming flow of substrate molecules. The main focus of the work is the output flow of metabolites, i.e., the effectiveness of the metabolic process. The two simplest models have been analyzed: short- and long-lived complexes of the source molecules with a metabolizing point (Hole), without queuing. It has been concluded that the approach based on queueing theory enables a very broad range of metabolic events to be described theoretically from a single probabilistic point of view.

  10. Biology is more theoretical than physics

    PubMed Central

    Gunawardena, Jeremy

    2013-01-01

    The word “theory” is used in at least two senses—to denote a body of widely accepted laws or principles, as in “Darwinian theory” or “quantum theory,” and to suggest a speculative hypothesis, often relying on mathematical analysis, that has not been experimentally confirmed. It is often said that there is no place for the second kind of theory in biology and that biology is not theoretical but based on interpretation of data. Here, ideas from a previous essay are expanded upon to suggest, to the contrary, that the second kind of theory has always played a critical role and that biology, therefore, is a good deal more theoretical than physics. PMID:23765269

  11. Theoretical limits on detection and analysis of small earthquakes

    NASA Astrophysics Data System (ADS)

    Kwiatek, Grzegorz; Ben-Zion, Yehuda

    2016-08-01

    We investigate theoretical limits on detection and reliable estimates of source characteristics of small earthquakes using synthetic seismograms for shear/tensile dislocations on kinematic circular ruptures, together with observed seismic noise and the properties of several acquisition systems (instrument response, sampling rate). Simulated source time functions for shear/tensile dislocation events with different magnitudes, static stress drops, and rupture velocities provide estimates for the amplitude and frequency content of P and S phases at various observation angles. The source time functions are convolved with a Green's function for a homogeneous solid, assuming given P and S wave velocities, attenuation coefficients, and instrument response. The synthetic waveforms are superposed with average levels of the observed ambient seismic noise up to 1 kHz. The combined seismograms are used to calculate signal-to-noise ratios and the expected frequency content of P and S phases at various locations. The synthetic simulations of signal-to-noise ratio reproduce observed ratios extracted from several well-recorded data sets. The results provide guidelines on the detection of small events in various geological environments, along with information relevant to reliable analyses of earthquake source properties.

  12. Theoretical, Methodological, and Empirical Approaches to Cost Savings: A Compendium

    SciTech Connect

    M Weimar

    1998-12-10

    This publication summarizes and contains the original documentation for understanding why the U.S. Department of Energy's (DOE's) privatization approach provides cost savings and the different approaches that could be used in calculating cost savings for the Tank Waste Remediation System (TWRS) Phase I contract. The initial section summarizes the approaches in the different papers. The appendices are the individual source papers, which have been reviewed by individuals outside of the Pacific Northwest National Laboratory and the TWRS Program. Appendix A provides a theoretical basis for, and estimate of, the level of savings that can be obtained from a fixed-price contract with performance risk maintained by the contractor. Appendix B provides the methodology for determining cost savings when comparing a fixed-price contractor with a Management and Operations (M&O) contractor (cost-plus contractor). Appendix C summarizes the economic model used to calculate cost savings and provides hypothetical output from preliminary calculations. Appendix D provides the summary of the approach for the DOE-Richland Operations Office (RL) estimate of the M&O contractor to perform the same work as BNFL Inc. Appendix E contains information on cost growth and per-metric-ton-of-glass costs for high-level waste at two other DOE sites, West Valley and Savannah River. Appendix F addresses a risk allocation analysis of the BNFL proposal that indicates that the current approach is still better than the alternative.

  13. Theoretical and Experimental Study of Thermoacoustic Engines

    DTIC Science & Technology

    1992-12-01

    Report documentation page fields; recoverable details: personal author Richard...; report period 92/9/30 to 92/12/31. Abstract fragment: "...central portion. Copper rings of thickness 3.2 mm, inner radius 4.32 cm, and outer radius of 12 cm were supported between the ends of the ceramic piece..."

  14. A Theoretical Investigation of Acoustic Cavitation.

    DTIC Science & Technology

    1985-07-15

    dynamics known as the Rayleigh-Plesset equation. This equation was shown to work quite well under some conditions. Recent experiments have shown that when the acoustic driving frequency is near one of the bubble's harmonic resonances, the theoretical values predicted by the Rayleigh-Plesset equation are inconsistent with observed values. This inconsistency led Prosperetti to consider the internal pressure term in the Rayleigh-Plesset

  15. Theoretical maximum concentration factors for solar concentrators

    SciTech Connect

    Nicolas, R.O.; Duran, J.C.

    1984-11-01

    The theoretical maximum concentration factors are determined for different definitions of the factor for two-dimensional and three-dimensional solar concentrators that are valid for any source with nonuniform intensity distribution. Results are obtained starting from those derived by Winston (1970) for Lambertian sources. In particular, maximum concentration factors for three models of the solar-disk intensity distribution are calculated. 12 references.
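    For the Lambertian-source baseline the abstract starts from (Winston, 1970), the limits have a closed form: C_max = n/sin(θ_s) for 2D (trough) concentrators and (n/sin(θ_s))^2 for 3D, with θ_s the source's angular half-width. A quick numerical check for the solar disk (half-angle approximately 0.267° from Earth; the paper's point is that nonuniform solar intensity models modify these baseline values):

```python
import numpy as np

# Thermodynamic (etendue) limits on geometric concentration for a
# Lambertian source of angular half-width theta_s, concentrator index n:
#   2D (trough): C_max = n / sin(theta_s)
#   3D:          C_max = (n / sin(theta_s))**2
theta_s = np.radians(0.267)   # solar half-angle as seen from Earth
n = 1.0                       # concentrator operating in air
c2d = n / np.sin(theta_s)
c3d = (n / np.sin(theta_s)) ** 2
print(round(c2d), round(c3d))  # roughly 215 and 46000
```

    These are the familiar order-of-magnitude ceilings (a few hundred for troughs, a few tens of thousands for 3D concentrators) against which practical designs are judged.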

  16. Game Theoretic Approaches to Protect Cyberspace

    DTIC Science & Technology

    2010-04-20

    Report documentation page fields; recoverable details: grant number N00014-09-1-0752; authors Sajjan Shiva, Dipankar Dasgupta, Qishi Wu; performing organization report number CS-10-001; sponsoring agency Office of Naval Research.

  17. Theoretical investigation of gas-surface interactions

    NASA Technical Reports Server (NTRS)

    Lee, Timothy J.

    1989-01-01

    Four reprints are presented from four projects which are to be published in a refereed journal. Two are of interest to us and are presented herein. One is a description of a very detailed theoretical study of four anionic hydrogen bonded complexes. The other is a detailed study of the first generally reliable diagnostic for determining the quality of results that may be expected from single reference based electron correlation methods.

  18. Theoretical Studies of Rare Gas Halide Systems

    DTIC Science & Technology

    1988-11-01

    Fragment of a table comparing present computed values with those of Matcha and Milleur and of Chupka and Russell, together with reference fragments: "...Xe+H," The Journal of Chemical Physics, Vol. 68, No. 11, pp. 4917-4929, June 1978; Matcha, R.L., and Milleur, M.B., "Theoretical Studies of...

  19. Spinning fluids: A group theoretical approach

    NASA Astrophysics Data System (ADS)

    Capasso, Dario; Sarkar, Debajyoti

    2014-04-01

    The aim of this article is to introduce a Lagrangian formulation of relativistic non-Abelian spinning fluids in group theory language. The corresponding Mathisson-Papapetrou equation for spinning fluids in terms of the reduction limit of the de Sitter group has been proposed. The equation we find correctly boils down to the one for nonspinning fluids. Two alternative approaches based on a group theoretical formulation of particle dynamics are also explored.

  20. Theoretical nuclear structure. Progress report for 1997

    SciTech Connect

    Nazarewicz, W.; Strayer, M.R.

    1997-12-31

    This research effort is directed toward theoretical support and guidance for the fields of radioactive ion beam physics, gamma-ray spectroscopy, and the interface between nuclear structure and nuclear astrophysics. The authors report substantial progress in all these areas. One measure of progress is publications and invited material. The research described here has led to more than 25 papers that are published, accepted, or submitted to refereed journals, and to 25 invited presentations at conferences and workshops.

  1. Observational and theoretical investigations in solar seismology

    NASA Technical Reports Server (NTRS)

    Noyes, Robert W.

    1992-01-01

    This is the final report on a project to develop a theoretical basis for interpreting solar oscillation data in terms of the interior dynamics and structure of the Sun. The topics covered include the following: (1) studies of the helioseismic signatures of differential rotation and convection in the solar interior; (2) wave generation by turbulent convection; and (3) the study of antipodal imaging of sunspots and active-region tomography.

  2. Theoretical Calculations of Atomic Data for Spectroscopy

    NASA Technical Reports Server (NTRS)

    Bautista, Manuel A.

    2000-01-01

    Several different approximations and techniques have been developed for the calculation of atomic structure, ionization, and excitation of atoms and ions. These techniques have been used to compute large amounts of spectroscopic data of various levels of accuracy. This paper presents a review of these theoretical methods to help non-experts in atomic physics to better understand the qualities and limitations of various data sources and to assess the reliability of spectral models based on those data.

  3. Theoretical resources for a globalised bioethics.

    PubMed

    Verkerk, Marian A; Lindemann, Hilde

    2011-02-01

    In an age of global capitalism, pandemics, far-flung biobanks, multinational drug trials and telemedicine it is impossible for bioethicists to ignore the global dimensions of their field. However, if they are to do good work on the issues that globalisation requires of them, they need theoretical resources that are up to the task. This paper identifies four distinct understandings of 'globalised' in the bioethics literature: (1) a focus on global issues; (2) an attempt to develop a universal ethical theory that can transcend cultural differences; (3) an awareness of how bioethics itself has expanded, with new centres and journals emerging in nearly every corner of the globe; (4) a concern to avoid cultural imperialism in encounters with other societies. Each of these approaches to globalisation has some merit, as will be shown. The difficulty with them is that the standard theoretical tools on which they rely are not designed for cross-cultural ethical reflection. As a result, they leave important considerations hidden. A set of theoretical resources is proposed to deal with the moral puzzles of globalisation. Abandoning idealised moral theory, a normative framework is developed that is sensitive enough to account for differences without losing the broader context in which ethical issues arise. An empirically nourished, self-reflexive, socially inquisitive, politically critical and inclusive ethics allows bioethicists the flexibility they need to pick up on the morally relevant particulars of this situation here without losing sight of the broader cultural contexts in which it all takes place.

  4. Theoretical study on a water muffler

    NASA Astrophysics Data System (ADS)

    Du, T.; Chen, Y. W.; Miao, T. C.; Wu, D. Z.

    2016-05-01

    Theoretical computation on a previously studied water muffler is carried out in this article. The water muffler is composed of two main parts, namely, the Kevlar-reinforced rubber tube and the inner noise-reduction structure. The rubber wall of the tube is assumed to function as a rigid wall lined with sound-absorbing material and is described by a complex radial wave number. A comparison has been made among the results obtained from theoretical computation, FEM (finite element method) simulation, and experiment, for the rubber tube alone and for the complete water muffler. The theoretical results show good agreement in overall trend with the FEM-simulated and measured results. After that, a parametric study on the diameter of the inner structure and that of the rubber tube is conducted. Results show that the diameter of the left inner structure has the most significant effect on the SPL of the water muffler, due to its location and its effect on the diameter ratio D2/D1.

  5. Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2005-01-01

Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to estimation accuracy, because the unconstrained Kalman filter is already theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, heuristic knowledge is used to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
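
    The tuning idea in this abstract — trust the unconstrained filter while the measurement residuals agree with their theoretical spread, and lean on the heuristic constraint when they do not — can be sketched for a scalar system. This is an illustrative simplification, not the paper's turbofan implementation; the system parameters, the x ≥ 0 constraint, and the NIS-based weighting rule are all assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar system: x_k = a*x_{k-1} + w,  y_k = x_k + v
a, q, r = 0.95, 0.1, 0.5
lower = 0.0                      # heuristic state constraint: x >= 0 (assumed)

x_true, x_est, p = 2.0, 2.0, 1.0
estimates = []
for k in range(200):
    x_true = a * x_true + rng.normal(0, np.sqrt(q))
    y = x_true + rng.normal(0, np.sqrt(r))

    # standard (unconstrained) Kalman predict/update
    x_pred, p_pred = a * x_est, a * a * p + q
    s = p_pred + r               # theoretical innovation variance
    nu = y - x_pred              # innovation (measurement residual)
    k_gain = p_pred / s
    x_unc = x_pred + k_gain * nu
    p = (1 - k_gain) * p_pred

    # confidence measure: does the residual agree with its theoretical N(0, s)?
    nis = nu * nu / s            # ~chi-square(1) when the filter is healthy
    w = min(1.0, nis / 4.0)      # weight toward the constraint as residuals degrade

    x_con = max(x_unc, lower)    # constrained estimate (simple projection)
    x_est = (1 - w) * x_unc + w * x_con
    estimates.append(x_est)

print(np.mean(estimates))
```

    The blend makes the constraint "soft": when the normalized innovation squared is small, the estimate follows the optimal unconstrained filter almost exactly.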

  6. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error generally yield rather weak results for small sample sizes unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information-theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that looks only at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
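
    The first, averaging-based method can be illustrated on a synthetic problem whose Bayes rate is known in closed form. The ensemble below is a deliberately simple stand-in (class-conditional Gaussians fitted on bootstrap samples), not the classifiers used in the article; the point demonstrated is the plug-in rule, estimating the Bayes error as the sample mean of 1 − max_c p̄(c|x) over the averaged posteriors.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
# two equiprobable unit-variance Gaussian classes at 0 and 2;
# the true Bayes error is Phi(-1) ~ 0.159
x = np.concatenate([rng.normal(0, 1, n), rng.normal(2, 1, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])

def posterior(mu0, mu1, pts):
    # class-1 posterior from fitted unit-variance class-conditional Gaussians
    l0 = np.exp(-0.5 * (pts - mu0) ** 2)
    l1 = np.exp(-0.5 * (pts - mu1) ** 2)
    return l1 / (l0 + l1)

# ensemble: each member fits the class means on a bootstrap resample
post, members = np.zeros_like(x), 25
for _ in range(members):
    idx = rng.integers(0, 2 * n, 2 * n)
    xb, yb = x[idx], y[idx]
    post += posterior(xb[yb == 0].mean(), xb[yb == 1].mean(), x)
post /= members

# plug-in Bayes-error estimate from the averaged posteriors
bayes_est = np.mean(np.minimum(post, 1 - post))
print(round(bayes_est, 3))   # typically lands near the true value 0.159
```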

  7. Information-Theoretic Perspectives on Geophysical Models

    NASA Astrophysics Data System (ADS)

    Nearing, Grey

    2016-04-01

practice of science (except by Gong et al., 2013, whose fundamental insight is the basis for this talk), and here I offer two examples of practical methods that scientists might use to approximately measure ontological information. I place this practical discussion in the context of several recent and high-profile experiments that have found that simple out-of-sample statistical models typically (vastly) outperform our most sophisticated terrestrial hydrology models. I offer some perspective on several open questions about how to use these findings to improve our models and understanding of these systems. Cartwright, N. (1983) How the Laws of Physics Lie. New York, NY: Cambridge Univ Press. Clark, M. P., Kavetski, D. and Fenicia, F. (2011) 'Pursuing the method of multiple working hypotheses for hydrological modeling', Water Resources Research, 47(9). Cover, T. M. and Thomas, J. A. (1991) Elements of Information Theory. New York, NY: Wiley-Interscience. Cox, R. T. (1946) 'Probability, frequency and reasonable expectation', American Journal of Physics, 14, pp. 1-13. Csiszár, I. (1972) 'A Class of Measures of Informativity of Observation Channels', Periodica Mathematica Hungarica, 2(1), pp. 191-213. Davies, P. C. W. (1990) 'Why is the physical world so comprehensible', Complexity, entropy and the physics of information, pp. 61-70. Gong, W., Gupta, H. V., Yang, D., Sricharan, K. and Hero, A. O. (2013) 'Estimating Epistemic & Aleatory Uncertainties During Hydrologic Modeling: An Information Theoretic Approach', Water Resources Research, 49(4), pp. 2253-2273. Jaynes, E. T. (2003) Probability Theory: The Logic of Science. New York, NY: Cambridge University Press. Nearing, G. S. and Gupta, H. V. (2015) 'The quantity and quality of information in hydrologic models', Water Resources Research, 51(1), pp. 524-538. Popper, K. R. (2002) The Logic of Scientific Discovery. New York: Routledge. Van Horn, K. S. (2003) 'Constructing a logic of plausible inference: a guide to Cox's theorem'.

  8. Estimating random signal parameters from noisy images with nuisance parameters

    PubMed Central

    Whitaker, Meredith Kathryn; Clarkson, Eric; Barrett, Harrison H.

    2008-01-01

    In a pure estimation task, an object of interest is known to be present, and we wish to determine numerical values for parameters that describe the object. This paper compares the theoretical framework, implementation method, and performance of two estimation procedures. We examined the performance of these estimators for tasks such as estimating signal location, signal volume, signal amplitude, or any combination of these parameters. The signal is embedded in a random background to simulate the effect of nuisance parameters. First, we explore the classical Wiener estimator, which operates linearly on the data and minimizes the ensemble mean-squared error. The results of our performance tests indicate that the Wiener estimator can estimate amplitude and shape once a signal has been located, but is fundamentally unable to locate a signal regardless of the quality of the image. Given these new results on the fundamental limitations of Wiener estimation, we extend our methods to include more complex data processing. We introduce and evaluate a scanning-linear estimator that performs impressively for location estimation. The scanning action of the estimator refers to seeking a solution that maximizes a linear metric, thereby requiring a global-extremum search. The linear metric to be optimized can be derived as a special case of maximum a posteriori (MAP) estimation when the likelihood is Gaussian and a slowly varying covariance approximation is made. PMID:18545527
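
    A one-dimensional caricature of the scanning-linear idea described above: slide a known signal profile across the data, evaluate a linear (matched-filter-style) metric at every candidate location, and take the global maximizer. The Gaussian profile, its amplitude, and the white-noise background are assumptions of this sketch; the paper's estimator operates on images with lumpy random backgrounds and a covariance-informed metric.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
grid = np.arange(n)
true_loc = 90                                  # location to be estimated

def profile(c):
    # known signal shape centered at candidate location c (assumed Gaussian)
    return np.exp(-0.5 * ((grid - c) / 4.0) ** 2)

image = 3.0 * profile(true_loc) + rng.normal(0, 0.5, n)   # noisy 1-D "image"

# scanning-linear estimate: maximize a linear metric over all candidate
# locations -- a global-extremum search, not a local gradient step
scores = np.array([profile(c) @ image for c in grid])
loc_hat = int(np.argmax(scores))
print(loc_hat)
```

    This is exactly the kind of task the abstract says a single Wiener (linear) estimate cannot solve: the linearity here is in the metric being scanned, not in the map from data to estimate.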

  9. Simulation of Theoretical Most-Extreme Geomagnetic Sudden Commencements

    NASA Astrophysics Data System (ADS)

    Welling, Daniel; Love, Jeffrey; Wiltberger, Michael; Rigler, Erin; Gombosi, Tamas

    2016-04-01

    We report results from a numerical simulation of geomagnetic sudden commencements driven by solar wind conditions given by theoretical-limit extreme coronal-mass ejections (CMEs) estimated by Tsurutani and Lakhina [2014]. The CME characteristics at Earth are a step function that jumps from typical quiet values to 2700 km/s flow speed and a magnetic field magnitude of 127 nT. These values are used to drive three coupled models: a global magnetohydrodynamic (MHD) magnetospheric model (BATS-R-US), a ring current model (the Rice Convection Model, RCM), and a height-integrated ionospheric electrodynamics model (the Ridley Ionosphere Model, RIM), all coupled together using the Space Weather Modeling Framework (SWMF). Additionally, simulations from the Lyon-Fedder-Mobarry MHD model are performed for comparison. The commencement is simulated with both purely northward and southward IMF orientations. Low-latitude ground-level geomagnetic variations, both B and dB/dt, are estimated in response to the storm sudden commencement. For a northward interplanetary magnetic field (IMF) storm, the combined models predict a maximum sudden commencement response, Dst-equivalent of +200 nT and a maximum local dB/dt of ~200nT/s. While this positive Dst response is driven mainly by magnetopause currents, complicated and dynamic Birkeland current patterns also develop, which drive the strong dB/dt responses at high latitude. For southward IMF conditions, erosion of dayside magnetic flux allows magnetopause currents to approach much closer to the Earth, leading to a stronger terrestrial response (Dst-equivalent of +250 nT). Further, high latitude signals from Region 1 Birkeland currents move to lower latitudes during the southward IMF case, increasing the risk to populated areas around the globe. Results inform fundamental understanding of solar-terrestrial interaction and benchmark estimates for induction hazards of interest to the electric-power grid industry.

  10. Estimating airline operating costs

    NASA Technical Reports Server (NTRS)

    Maddalon, D. V.

    1978-01-01

    A review was made of the factors affecting commercial aircraft operating and delay costs. From this work, an airline operating cost model was developed which includes a method for estimating the labor and material costs of individual airframe maintenance systems. The model, similar in some respects to the standard Air Transport Association of America (ATA) Direct Operating Cost Model, permits estimates of aircraft-related costs not now included in the standard ATA model (e.g., aircraft service, landing fees, flight attendants, and control fees). A study of the cost of aircraft delay was also made and a method for estimating the cost of certain types of airline delay is described.

  11. Estimating cell populations

    NASA Technical Reports Server (NTRS)

    White, B. S.; Castleman, K. R.

    1981-01-01

    An important step in the diagnosis of a cervical cytology specimen is estimating the proportions of the various cell types present. This is usually done with a cell classifier, the error rates of which can be expressed as a confusion matrix. We show how to use the confusion matrix to obtain an unbiased estimate of the desired proportions. We show that the mean square error of this estimate depends on a 'befuddlement matrix' derived from the confusion matrix, and how this, in turn, leads to a figure of merit for cell classifiers. Finally, we work out the two-class problem in detail and present examples to illustrate the theory.
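
    The correction described above can be written in a few lines: if C[i, j] is the probability that a true class-i cell is classified as class j, the expected observed proportions are q = pC, so an unbiased estimate of the true proportions applies the inverse of C to the observed ones. The specific two-class confusion matrix below is made up for illustration.

```python
import numpy as np

# confusion matrix: C[i, j] = P(classifier outputs class j | true class i)
C = np.array([[0.9, 0.1],
              [0.2, 0.8]])

p_true = np.array([0.3, 0.7])    # true class proportions (unknown in practice)
q = p_true @ C                   # expected observed (classified) proportions

# unbiased correction: solve q = p @ C for p
p_hat = q @ np.linalg.inv(C)
print(np.round(p_hat, 6))        # recovers [0.3, 0.7]
```

    With finite samples q is replaced by observed classification frequencies, and the inverse-confusion correction remains unbiased; its variance is governed by the "befuddlement matrix" the abstract mentions.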

  12. A posteriori pointwise error estimates for the boundary element method

    SciTech Connect

    Paulino, G.H.; Gray, L.J.; Zarikian, V.

    1995-01-01

    This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

  13. Estimation of turbulent kinetic energy dissipation

    NASA Astrophysics Data System (ADS)

    Chen, Huey-Long; Hondzo, Miki; Rao, A. Ramachandra

    2001-06-01

    The kinetic energy dissipation rate is one of the key intrinsic fluid flow parameters in environmental fluid dynamics. In an indirect method the kinetic energy dissipation rate is estimated from the Batchelor spectrum. Because the Batchelor spectrum has a significant difference between the highest and lowest spectral values, the spectral bias in the periodogram causes the lower spectral values at higher frequencies to increase. Consequently, the accuracy in fitting the Batchelor spectrum is affected. In this study, the multitaper spectral estimation method is compared to conventional methods in estimating the synthetic temperature gradient spectra. It is shown in the results that the multitaper spectra have less bias than the Hamming window smoothed spectra and the periodogram in estimating the synthetic temperature gradient spectra. The results of fitting the Batchelor spectrum based on four error functions are compared. When the theoretical noise spectrum is available and delineated at the intersection of the estimated spectrum, the fitting results of the kinetic energy dissipation rate corresponding to the four error functions do not have significant differences. However, when the noise spectrum is unknown and part of the Batchelor spectrum overlaps the region where the noise spectrum dominates, the weighted chi-square distributed error function has the best fitting results.
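
    The variance advantage of multitaper estimation over the raw periodogram is easy to demonstrate on white noise, whose true spectrum is flat: averaging periodograms over K orthogonal DPSS tapers cuts the estimator variance by roughly a factor of K. The NW = 4, K = 8 choice below is a common default, assumed for the sketch rather than taken from the study.

```python
import numpy as np
from scipy.signal.windows import dpss

rng = np.random.default_rng(3)
n = 1024
x = rng.normal(0, 1, n)                      # white noise, flat true spectrum

# ordinary periodogram (rectangular window)
per = np.abs(np.fft.rfft(x)) ** 2 / n

# multitaper estimate: average periodograms over K orthogonal DPSS tapers
K = 8
tapers = dpss(n, NW=4, Kmax=K)               # shape (K, n), unit-energy tapers
mt = np.mean([np.abs(np.fft.rfft(t * x)) ** 2 for t in tapers], axis=0)

# both estimates target PSD = 1; averaging over tapers shrinks the variance
print(per[1:-1].var(), mt[1:-1].var())
```

    The same variance and leakage reduction is what improves fits of the Batchelor spectrum, whose spectral values span several orders of magnitude.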

  14. Intrinsic graph structure estimation using graph Laplacian.

    PubMed

    Noda, Atsushi; Hino, Hideitsu; Tatsuno, Masami; Akaho, Shotaro; Murata, Noboru

    2014-07-01

    A graph is a mathematical representation of a set of variables where some pairs of the variables are connected by edges. Common examples of graphs are railroads, the Internet, and neural networks. It is both theoretically and practically important to estimate the intensity of direct connections between variables. In this study, a problem of estimating the intrinsic graph structure from observed data is considered. The observed data in this study are a matrix with elements representing dependency between nodes in the graph. The dependency represents more than direct connections because it includes influences of various paths. For example, each element of the observed matrix represents a co-occurrence of events at two nodes or a correlation of variables corresponding to two nodes. In this setting, spurious correlations make the estimation of direct connection difficult. To alleviate this difficulty, a digraph Laplacian is used for characterizing a graph. A generative model of this observed matrix is proposed, and a parameter estimation algorithm for the model is also introduced. The notable advantage of the proposed method is its ability to deal with directed graphs, while conventional graph structure estimation methods such as covariance selections are applicable only to undirected graphs. The algorithm is experimentally shown to be able to identify the intrinsic graph structure.
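
    The core difficulty — the observed dependency matrix mixes direct edges with influences propagated along longer paths — can be made concrete with a toy generative model. The model M = (I − αA)⁻¹, the 3-node directed adjacency matrix, and α are all assumptions of this sketch; the paper's actual method characterizes the graph through a digraph Laplacian and estimates it from noisy observations.

```python
import numpy as np

# toy directed graph: A[i, j] = direct influence of node i on node j (assumed)
A = np.array([[0.0, 0.6, 0.0],
              [0.0, 0.0, 0.5],
              [0.3, 0.0, 0.0]])
alpha = 0.5

# the observed dependency matrix aggregates *all* paths, not just direct edges:
# M = I + aA + a^2 A^2 + ... = (I - aA)^{-1}   (converges for small alpha)
M = np.linalg.inv(np.eye(3) - alpha * A)

# when the generative model is known, the direct structure is recovered exactly
A_hat = (np.eye(3) - np.linalg.inv(M)) / alpha
print(np.round(A_hat, 6))
```

    Real data replace M with a noisy co-occurrence or correlation matrix, which is what makes the estimation problem — and the suppression of spurious, path-induced correlations — nontrivial.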

  15. A Geomagnetic Estimate of Mean Paleointensity

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte

    2004-01-01

To test a statistical hypothesis about Earth's magnetic field against paleomagnetism, the present field is used to estimate time-averaged paleointensity. The estimate uses the modern magnetic multipole spectrum R(n), which gives the mean square induction represented by spherical harmonics of degree n averaged over the sphere of radius a = 6371.2 km. The hypothesis asserts that the low degree multipole powers of the core-source field are distributed as chi-squared with 2n+1 degrees of freedom and expectation values {R(n)} = K[(n+1/2)/(n(n+1))](c/a)^(2n+4), where c is the 3480 km radius of Earth's core. (This is compatible with a usually mainly geocentric axial dipolar field.) Amplitude K is estimated by fitting theoretical to observational spectra through degree 12. The resulting calibrated expectation spectrum is summed through degree 12 to estimate the expected square intensity {F^2}. The sum also estimates {F^2} averaged over geologic time, insofar as the present magnetic spectrum is a fair sample of that generated in the past by core geodynamic processes.
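
    The spectrum summation is simple enough to reproduce numerically. The amplitude K below is a placeholder, not the fitted value from the paper; only the functional form of the expectation spectrum and the degree-1-through-12 sum follow the abstract.

```python
import math

a, c = 6371.2, 3480.0        # Earth and core radii, km
K = 6.3e8                    # amplitude in nT^2 -- placeholder, fit in the paper

def R_expect(n):
    # expectation of the degree-n multipole power at the surface:
    # {R(n)} = K [(n + 1/2) / (n (n + 1))] (c/a)^(2n+4)
    return K * ((n + 0.5) / (n * (n + 1))) * (c / a) ** (2 * n + 4)

F2 = sum(R_expect(n) for n in range(1, 13))   # expected square intensity {F^2}
print(math.sqrt(F2))                           # rms field intensity, nT
```

    Because (c/a) < 1, the series is dominated by the dipole term and converges quickly, which is why truncating at degree 12 suffices.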

  16. A Geomagnetic Estimate of Mean Paleointensity

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    2004-01-01

To test a statistical hypothesis about Earth's magnetic field against paleomagnetism, the present field is used to estimate time-averaged paleointensity. The estimate uses the modern magnetic multipole spectrum R(n), which gives the mean square induction represented by spherical harmonics of degree n averaged over the sphere of radius a = 6371.2 km. The hypothesis asserts that low degree multipole powers of the core-source field are distributed as chi-squared with 2n+1 degrees of freedom and expectation values {R(n)} = K[(n+1/2)/(n(n+1))](c/a)^(2n+4), where c is the 3480 km radius of the Earth's core. (This is compatible with a usually mainly geocentric axial dipolar field.) Amplitude K is estimated by fitting theoretical to observational spectra through degree 12. The resulting calibrated expectation spectrum is summed through degree 12 to estimate the expected square intensity F^2. The sum also estimates F^2 averaged over geologic time, insofar as the present magnetic spectrum is a fair sample of that generated in the past by core geodynamic processes. Additional information is included in the original extended abstract.

  17. Estimating location without external cues.

    PubMed

    Cheung, Allen

    2014-10-01

    The ability to determine one's location is fundamental to spatial navigation. Here, it is shown that localization is theoretically possible without the use of external cues, and without knowledge of initial position or orientation. With only error-prone self-motion estimates as input, a fully disoriented agent can, in principle, determine its location in familiar spaces with 1-fold rotational symmetry. Surprisingly, localization does not require the sensing of any external cue, including the boundary. The combination of self-motion estimates and an internal map of the arena provide enough information for localization. This stands in conflict with the supposition that 2D arenas are analogous to open fields. Using a rodent error model, it is shown that the localization performance which can be achieved is enough to initiate and maintain stable firing patterns like those of grid cells, starting from full disorientation. Successful localization was achieved when the rotational asymmetry was due to the external boundary, an interior barrier or a void space within an arena. Optimal localization performance was found to depend on arena shape, arena size, local and global rotational asymmetry, and the structure of the path taken during localization. Since allothetic cues including visual and boundary contact cues were not present, localization necessarily relied on the fusion of idiothetic self-motion cues and memory of the boundary. Implications for spatial navigation mechanisms are discussed, including possible relationships with place field overdispersion and hippocampal reverse replay. Based on these results, experiments are suggested to identify if and where information fusion occurs in the mammalian spatial memory system.

  18. Stochastic Complexity Based Estimation of Missing Elements in Questionnaire Data.

    ERIC Educational Resources Information Center

    Tirri, Henry; Silander, Tomi

    A new information-theoretically justified approach to missing data estimation for multivariate categorical data was studied. The approach is a model-based imputation procedure relative to a model class (i.e., a functional form for the probability distribution of the complete data matrix), which in this case is the set of multinomial models with…

  19. Estimation of coefficients and boundary parameters in hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Murphy, K. A.

    1984-01-01

    Semi-discrete Galerkin approximation schemes are considered in connection with inverse problems for the estimation of spatially varying coefficients and boundary condition parameters in second order hyperbolic systems typical of those arising in 1-D surface seismic problems. Spline based algorithms are proposed for which theoretical convergence results along with a representative sample of numerical findings are given.

  20. Spatial Working Memory Capacity Predicts Bias in Estimates of Location

    ERIC Educational Resources Information Center

    Crawford, L. Elizabeth; Landy, David; Salthouse, Timothy A.

    2016-01-01

    Spatial memory research has attributed systematic bias in location estimates to a combination of a noisy memory trace with a prior structure that people impose on the space. Little is known about intraindividual stability and interindividual variation in these patterns of bias. In the current work, we align recent empirical and theoretical work on…

  1. Shell Model Estimate of Electric Dipole Moments for Xe Isotopes

    NASA Astrophysics Data System (ADS)

    Teruya, Eri; Yoshinaga, Naotaka; Higashiyama, Koji

The nuclear Schiff moments of Xe isotopes, which induce electric dipole moments of neutral Xe atoms, are theoretically estimated. Parity- and time-reversal-violating two-body nuclear interactions are assumed. The nuclear wave functions are calculated in terms of the nuclear shell model. Influences of core excitations, in addition to over-shell excitations, on the Schiff moments are discussed.

  2. Theoretical Model for Volume Fraction of UC, 235U Enrichment, and Effective Density of Final U 10Mo Alloy

    SciTech Connect

    Devaraj, Arun; Prabhakaran, Ramprashad; Joshi, Vineet V.; Hu, Shenyang Y.; McGarrah, Eric J.; Lavender, Curt A.

    2016-04-12

The purpose of this document is to provide a theoretical framework for (1) estimating uranium carbide (UC) volume fraction in a final alloy of uranium with 10 weight percent molybdenum (U-10Mo) as a function of final alloy carbon concentration, and (2) estimating effective 235U enrichment in the U-10Mo matrix after accounting for the loss of 235U in forming UC. This report will also serve as a theoretical baseline for the effective density of as-cast low-enriched U-10Mo alloy. Therefore, this report will serve as the baseline for quality control of final alloy carbon content.
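
    Under the simplest assumption — every carbon atom in the final alloy ends up bound in stoichiometric UC — the carbon concentration maps to a UC volume fraction through molar masses and densities. The carbon level and the densities below are placeholder values for illustration, not numbers from the report.

```python
# nominal inputs (placeholder values; molar masses g/mol, densities g/cm^3)
M_U, M_C = 238.03, 12.011
rho_UC, rho_alloy = 13.63, 17.15     # assumed handbook densities
w_C = 100e-6                          # 100 ppm carbon by weight, hypothetical

# assume every carbon atom is bound in stoichiometric UC
w_UC = w_C * (M_U + M_C) / M_C        # UC weight fraction in the alloy

# convert the weight fraction to a volume fraction via the densities
v_UC = (w_UC / rho_UC) / ((w_UC / rho_UC) + ((1 - w_UC) / rho_alloy))
print(f"UC volume fraction: {v_UC:.5%}")
```

    Because UC is less dense than the U-10Mo matrix, the volume fraction comes out somewhat larger than the weight fraction, which also matters for the effective-density baseline the report describes.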

  3. Theoretical and experimental analysis of a piezoelectric plate connected to a negative capacitance at MHz frequencies

    NASA Astrophysics Data System (ADS)

    Mansoura, S. A.; Benard, P.; Morvan, B.; Maréchal, P.; Hladky-Hennion, A.-C.; Dubus, B.

    2015-11-01

In this paper, a theoretical and experimental study of the electric impedance of a piezoelectric plate connected to a negative capacitance is performed in the MHz frequency range. The negative capacitance is realized with a circuit using current conveyors (CCII+). This circuit achieves large values of negative capacitance, of the same order as the static capacitance of the piezoelectric plate studied. Mason's model is used for the theoretical characterization of the piezoelectric plate connected to the negative capacitance circuit. The experimental results show a large tunability of the piezoelectric parallel resonance frequency over a range from 1.1 MHz to 1.28 MHz. Moreover, depending on the value of the negative capacitance, the effective electromechanical coupling factor of the piezoelectric plate can be adjusted: an increase of approximately 50% in the effective electromechanical coupling factor is measured experimentally, in very good agreement with the theoretical estimate.

  4. Estimation of fractal dimensions from transect data

    SciTech Connect

    Loehle, C.

    1994-04-01

    Fractals are a useful tool for analyzing the topology of objects such as coral reefs, forest canopies, and landscapes. Transects are often studied in these contexts, and fractal dimensions computed from them. An open question is how representative a single transect is. Transects may also be used to estimate the dimensionality of a surface. Again the question of representativeness of the transect arises. These two issues are related. This note qualifies the conditions under which transect data may be considered to be representative or may be extrapolated, based on both theoretical and empirical results.
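
    Box counting is the standard way such a fractal dimension is computed from a single transect. A minimal sketch comparing a smooth profile with a random-walk profile; the box sizes and the log-log slope fit are conventional choices of the example, not the note's specific procedure.

```python
import numpy as np

def box_count_dim(y, sizes=(4, 8, 16, 32, 64)):
    """Box-counting dimension of the graph of a transect profile."""
    y = (y - y.min()) / (y.max() - y.min() + 1e-12)
    x = np.linspace(0, 1, len(y))
    counts = []
    for s in sizes:                      # s boxes per axis, box side 1/s
        occupied = set()
        for xi, yi in zip(x, y):
            occupied.add((min(int(xi * s), s - 1), min(int(yi * s), s - 1)))
        counts.append(len(occupied))
    # slope of log N(s) against log s estimates the dimension
    return np.polyfit(np.log(sizes), np.log(counts), 1)[0]

rng = np.random.default_rng(4)
smooth = np.linspace(0, 1, 4096)                 # straight line: dimension ~1
rough = np.cumsum(rng.normal(size=4096))         # random-walk transect: rougher

d_smooth = box_count_dim(smooth)
d_rough = box_count_dim(rough)
print(round(d_smooth, 2), round(d_rough, 2))
```

    The representativeness question the note raises is about how much such a single-transect estimate can be trusted as a stand-in for the dimensionality of the full surface.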

  5. Improvement of propeller static thrust estimation

    NASA Technical Reports Server (NTRS)

    Brusse, J.; Kettleborough, C. F.

    1975-01-01

The problem of improving the performance estimation of propellers operating in the heavily loaded static thrust condition was studied. The Goldstein theory was assessed as it applies to propellers operating in the static thrust condition. A review of theoretical considerations is presented along with a summary of the attempts made to obtain a numerical solution. The chordwise pressure distribution was determined during operation at a tip speed of 500 ft/sec. Chordwise integration of the pressures leads to the spanwise load distribution, and further integration would give the axial thrust.

  6. Estimating Radiogenic Cancer Risks

    EPA Pesticide Factsheets

    This document presents a revised methodology for EPA's estimation of cancer risks due to low-LET radiation exposures developed in light of information that has become available, especially new information on the Japanese atomic bomb survivors.

  7. Estimation of food consumption

    SciTech Connect

    Callaway, J.M. Jr.

    1992-04-01

    The research reported in this document was conducted as a part of the Hanford Environmental Dose Reconstruction (HEDR) Project. The objective of the HEDR Project is to estimate the radiation doses that people could have received from operations at the Hanford Site. Information required to estimate these doses includes estimates of the amounts of potentially contaminated foods that individuals in the region consumed during the study period. In that general framework, the objective of the Food Consumption Task was to develop a capability to provide information about the parameters of the distribution(s) of daily food consumption for representative groups in the population for selected years during the study period. This report describes the methods and data used to estimate food consumption and presents the results developed for Phase I of the HEDR Project.

  8. Supernova frequency estimates

    SciTech Connect

    Tsvetkov, D.Y.

    1983-01-01

    Estimates of the frequency of type I and II supernovae occurring in galaxies of different types are derived from observational material acquired by the supernova patrol of the Shternberg Astronomical Institute.

  9. Early Training Estimation System

    DTIC Science & Technology

    1980-08-01

are needed. First, by developing earlier and more accurate estimates of training requirements, the training planning process can begin earlier, and...this period and these questions require training input data, and (2) the early training planning process requires a solid foundation on which to...development of initial design, task, skill, and training estimates; provision of input into training planning and acquisition documents; provision

  10. Nonparametric conditional estimation

    SciTech Connect

    Owen, A.B.

    1987-01-01

Many nonparametric regression techniques (such as kernels, nearest neighbors, and smoothing splines) estimate the conditional mean of Y given X = x by a weighted sum of observed Y values, where observations with X values near x tend to have larger weights. In this report the weights are taken to represent a finite signed measure on the space of Y values. This measure is studied as an estimate of the conditional distribution of Y given X = x. From estimates of the conditional distribution, estimates of conditional means, standard deviations, quantiles and other statistical functionals may be computed. Chapter 1 illustrates the computation of conditional quantiles and conditional survival probabilities on the Stanford Heart Transplant data. Chapter 2 contains a survey of nonparametric regression methods and introduces statistical metrics and von Mises' method for later use. Chapter 3 proves some consistency results. Chapter 4 provides conditions under which the suitably normalized errors in estimating the conditional distribution of Y have a Brownian limit. Using von Mises' method, asymptotic normality is obtained for nonparametric conditional estimates of compactly differentiable statistical functionals.
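
    The weighted-sum view generalizes beyond conditional means: treating the kernel weights as a measure on the observed Y values yields conditional CDFs and quantiles directly. A minimal sketch with a Gaussian kernel; the bandwidth h and the linear data-generating process are choices of the example, not of the report.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
X = rng.uniform(0, 1, n)
Y = 2.0 * X + rng.normal(0, 0.3, n)      # true conditional mean is 2x (assumed)

def conditional_weights(x0, X, h=0.05):
    # Gaussian kernel weights: a finite measure on the observed Y values
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return w / w.sum()

def conditional_quantile(x0, q):
    w = conditional_weights(x0, X)
    order = np.argsort(Y)
    cdf = np.cumsum(w[order])            # weighted empirical conditional CDF
    return Y[order][np.searchsorted(cdf, q)]

# conditional mean, median, and 90th percentile at x = 0.5
w = conditional_weights(0.5, X)
print(w @ Y, conditional_quantile(0.5, 0.5), conditional_quantile(0.5, 0.9))
```

    The same weight vector serves every conditional functional — mean, quantiles, survival probabilities — which is exactly the unifying point of the report.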

  11. Estimating networks with jumps

    PubMed Central

    Kolar, Mladen; Xing, Eric P.

    2013-01-01

    We study the problem of estimating a temporally varying coefficient and varying structure (VCVS) graphical model underlying data collected over a period of time, such as social states of interacting individuals or microarray expression profiles of gene networks, as opposed to i.i.d. data from an invariant model widely considered in current literature of structural estimation. In particular, we consider the scenario in which the model evolves in a piece-wise constant fashion. We propose a procedure that estimates the structure of a graphical model by minimizing the temporally smoothed L1 penalized regression, which allows jointly estimating the partition boundaries of the VCVS model and the coefficient of the sparse precision matrix on each block of the partition. A highly scalable proximal gradient method is proposed to solve the resultant convex optimization problem; and the conditions for sparsistent estimation and the convergence rate of both the partition boundaries and the network structure are established for the first time for such estimators. PMID:25013533

  12. Theoretical study of the C-H bond dissociation energy of C2H

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.

    1990-01-01

A theoretical study of the convergence of the C-H bond dissociation energy D(0) in C2H with respect to both the one- and n-particle spaces is presented. The calculated C-H bond energies of C2H2 and C2H4, which are in excellent agreement with experiment, are used for calibration. The best estimate for D(0) of 112.4 ± 2.0 kcal/mol is slightly below the recent experimental value of 116.3 ± 2.6 kcal/mol, but substantially above a previous theoretical estimate of 102 kcal/mol. The remaining discrepancy with experiment may reflect primarily the uncertainty in the experimental D(0) value of C2 required in the analysis.

  13. Should we adjust for a confounder if empirical and theoretical criteria yield contradictory results? A simulation study

    PubMed Central

    Lee, Paul H.

    2014-01-01

Confounders can be identified by one of two main strategies: empirical or theoretical. Although confounder identification strategies that combine empirical and theoretical strategies have been proposed, the need for adjustment remains unclear if the empirical and theoretical criteria yield contradictory results due to random error. We simulated several scenarios to mimic either the presence or the absence of a confounding effect and tested the accuracy of the exposure-outcome association estimates with and without adjustment. Various criteria (a significance criterion, and the change-in-estimate (CIE) criterion with a 10% cutoff and with a simulated cutoff) were imposed, and a range of sample sizes was tested. In the presence of a true confounding effect, unbiased estimates were obtained only by using the CIE criterion with a simulated cutoff. In the absence of a confounding effect, all criteria performed well regardless of adjustment. When the confounding factor was affected by both exposure and outcome, all criteria yielded accurate estimates without adjustment, but the adjusted estimates were biased. To conclude, theoretical confounders should be adjusted for regardless of the empirical evidence found. Adjusting for factors that do not have a confounding effect has minimal impact on the estimates. Potential confounders affected by both exposure and outcome should not be adjusted for. PMID:25124526
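
    The change-in-estimate criterion is easy to state in code: fit the exposure-outcome model with and without the candidate confounder and compare the coefficients against a cutoff. The simulation below uses an assumed linear data-generating process and the conventional 10% cutoff; it is a sketch of the criterion, not the paper's full simulation design.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20000
C = rng.normal(size=n)                        # confounder
X = 0.8 * C + rng.normal(size=n)              # exposure depends on C
Y = 0.5 * X + 0.7 * C + rng.normal(size=n)    # outcome: true exposure effect 0.5

def ols_slope(design, y):
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[0]                            # coefficient on the exposure column

crude = ols_slope(np.column_stack([X, np.ones(n)]), Y)
adjusted = ols_slope(np.column_stack([X, C, np.ones(n)]), Y)

# change-in-estimate criterion with the conventional 10% cutoff
adjust = abs(crude - adjusted) / abs(adjusted) > 0.10
print(round(crude, 3), round(adjusted, 3), adjust)
```

    With a genuine confounder the crude estimate is inflated and the criterion fires; rerunning with the C terms removed from X and Y illustrates the no-confounding case, where both fits agree and adjustment changes little.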

  14. Quantum turbulence: Theoretical and numerical problems

    NASA Astrophysics Data System (ADS)

    Nemirovskii, Sergey K.

    2013-03-01

    The term “quantum turbulence” (QT) unifies the wide class of phenomena where the chaotic set of one dimensional quantized vortex filaments (vortex tangles) appear in quantum fluids and greatly influence various physical features. Quantum turbulence displays itself differently depending on the physical situation, and ranges from quasi-classical turbulence in flowing fluids to a near equilibrium set of loops in phase transition. The statistical configurations of the vortex tangles are certainly different in, say, the cases of counterflowing helium and a rotating bulk, but in all the physical situations very similar theoretical and numerical problems arise. Furthermore, quite similar situations appear in other fields of physics, where a chaotic set of one dimensional topological defects, such as cosmic strings, or linear defects in solids, or lines of darkness in nonlinear light fields, appear in the system. There is an interpenetration of ideas and methods between these scientific topics which are far apart in other respects. The main purpose of this review is to bring together some of the most commonly discussed results on quantum turbulence, focusing on analytic and numerical studies. We set out a series of results on the general theory of quantum turbulence which aim to describe the properties of the chaotic vortex configuration, starting from vortex dynamics. In addition we insert a series of particular questions which are important both for the whole theory and for the various applications. We complete the article with a discussion of the hot topic that is undoubtedly mainstream in this field: the quasi-classical properties of quantum turbulence. We discuss this problem from the point of view of the theoretical results stated in the previous sections. We also include a section devoted to the experimental and numerical suggestions based on the discussed theoretical models.

  15. The International Centre for Theoretical Physics

    NASA Astrophysics Data System (ADS)

    Hussain, Faheem

    2008-07-01

    This talk traces in brief the genesis of the Abdus Salam International Centre for Theoretical Physics, Trieste, as one of Prof. Abdus Salam's major achievements. It outlines why Salam felt the necessity for establishing such a centre to help physicists in the developing world. It situates the founding of the Centre within Salam's broader vision of the causes of underdevelopment and of science as an engine for scientific, technological, economic and social development. The talk reviews the successes and failures of the ICTP and gives a brief overall view of the current status of the Centre.

  16. Finite area combustor theoretical rocket performance

    NASA Technical Reports Server (NTRS)

    Gordon, Sanford; Mcbride, Bonnie J.

    1988-01-01

    Prior to this report, the computer program of NASA SP-273 and NASA TM-86885 was capable of calculating theoretical rocket performance based only on the assumption of an infinite area combustion chamber (IAC). An option was added to this program which now also permits the calculation of rocket performance based on the assumption of a finite area combustion chamber (FAC). In the FAC model, the combustion process in the cylindrical chamber is assumed to be adiabatic, but nonisentropic. This results in a stagnation pressure drop from the injector face to the end of the chamber and a lower calculated performance for the FAC model than for the IAC model.

  17. The double-beta decay: Theoretical challenges

    SciTech Connect

    Horoi, Mihai

    2012-11-20

    Neutrinoless double beta decay is a unique process that could reveal physics beyond the Standard Model of particle physics: namely, if observed, it would prove that neutrinos are Majorana particles. In addition, it could provide information regarding the neutrino masses and their hierarchy, provided that reliable nuclear matrix elements can be obtained. The two-neutrino double beta decay is an associated process that is allowed by the Standard Model, and it has been observed in about ten nuclei. The present contribution gives a brief review of the theoretical challenges associated with these two processes, emphasizing the reliable calculation of the associated nuclear matrix elements.

  18. Recent Theoretical Studies On Excitation and Recombination

    NASA Technical Reports Server (NTRS)

    Pradhan, Anil K.

    2000-01-01

    New advances in the theoretical treatment of atomic processes in plasmas are described. These enable not only an integrated, unified, and self-consistent treatment of important radiative and collisional processes, but also large-scale computation of atomic data with high accuracy. An extension of the R-matrix work, from excitation and photoionization to electron-ion recombination, includes a unified method that subsumes both the radiative and the dielectronic recombination processes in an ab initio manner. The extensive collisional calculations for iron and iron-peak elements under the Iron Project are also discussed.

  19. Chemoinformatics as a Theoretical Chemistry Discipline.

    PubMed

    Varnek, Alexandre; Baskin, Igor I

    2011-01-17

    Here, chemoinformatics is considered as a theoretical chemistry discipline complementary to quantum chemistry and force-field molecular modeling. These three fields are compared with respect to molecular representation, inference mechanisms, basic concepts and application areas. A chemical space, a fundamental concept of chemoinformatics, is considered with respect to complex relations between chemical objects (graphs or descriptor vectors). Statistical Learning Theory, one of the main mathematical approaches in structure-property modeling, is briefly reviewed. Links between chemoinformatics and its "sister" fields - machine learning, chemometrics and bioinformatics are discussed.

  20. Simple theoretical models for composite rotor blades

    NASA Technical Reports Server (NTRS)

    Valisetty, R. R.; Rehfield, L. W.

    1984-01-01

    The development of theoretical rotor blade structural models for designs based upon composite construction is discussed. Care was exercised to include a number of nonclassical effects that previous experience indicated would be potentially important to account for. A model, representative of the size of a main rotor blade, is analyzed in order to assess the importance of various influences. The findings of this model study suggest that for the slenderness and closed cell construction considered, the refinements are of little importance and a classical type theory is adequate. The potential of elastic tailoring is dramatically demonstrated, so the generality of arbitrary ply layup in the cell wall is needed to exploit this opportunity.

  1. Module theoretic zero structures for system matrices

    NASA Technical Reports Server (NTRS)

    Wyman, Bostwick F.; Sain, Michael K.

    1987-01-01

    The coordinate-free module-theoretic treatment of transmission zeros for MIMO transfer functions developed by Wyman and Sain (1981) is generalized to include noncontrollable and nonobservable linear dynamical systems. Rational, finitely-generated-modular, and torsion-divisible interpretations of the Rosenbrock system matrix are presented; Gamma-zero and Omega-zero modules are defined and shown to contain the output-decoupling and input-decoupling zero modules, respectively, as submodules; and the cases of left and right invertible transfer functions are considered.

  2. Theoretical aspects of the agile mirror

    NASA Astrophysics Data System (ADS)

    Manheimer, Wallace M.; Fernsler, Richard

    1994-01-01

    A planar plasma mirror which can be oriented electronically could have the capability of providing electronic steering of a microwave beam in a radar or electronic warfare system. This system is denoted the agile mirror. A recent experiment has demonstrated such a planar plasma and the associated microwave reflection. This plasma was produced by a hollow cathode glow discharge, where the hollow cathode was a grooved metallic trench in a Lucite plate. Various theoretical aspects of this configuration of an agile mirror are examined here.

  3. Graph-theoretic strengths of contextuality

    NASA Astrophysics Data System (ADS)

    de Silva, Nadish

    2017-03-01

    Cabello-Severini-Winter and Abramsky-Hardy (building on the framework of Abramsky-Brandenburger) both provide classes of Bell and contextuality inequalities for very general experimental scenarios using vastly different mathematical techniques. We review both approaches, carefully detail the links between them, and give simple, graph-theoretic methods for finding inequality-free proofs of nonlocality and contextuality and for finding states exhibiting strong nonlocality and/or contextuality. Finally, we apply these methods to concrete examples in stabilizer quantum mechanics relevant to understanding contextuality as a resource in quantum computation.

  4. Towards a Theoretical Basis for Energy Economics.

    DTIC Science & Technology

    1980-08-01

    [The abstract for this record is OCR-garbled and not recoverable. Legible fragments cite Dasgupta, P. S. and Heal, G. M., Economic Theory and Exhaustible Resources, Cambridge University Press, and identify the report as by R. V. Grubbström, Naval Postgraduate School, Monterey, CA.]

  5. Theoretical and Experimental Beam Plasma Physics (TEBPP)

    NASA Technical Reports Server (NTRS)

    Roberts, B.

    1986-01-01

    The theoretical and experimental beam plasma physics (TEBPP) consists of a package of five instruments to measure electric and magnetic fields, plasma density and temperature, neutral density, photometric emissions, and energetic particle spectra during firings of the particle injector (SEPAC) electron beam. The package is deployed on a maneuverable boom (or RMS) and is used to measure beam characteristics and induced perturbations in the near field (< 10 m) and mid field (10 m to 100 m) along the electron beam. The TEBPP package will be designed to investigate induced oscillations and induced electromagnetic mode waves, neutral and ion density and temperature effects, and beam characteristics as a function of axial distance.

  6. Theoretical and Experimental Beam Plasma Physics (TEBPP)

    NASA Technical Reports Server (NTRS)

    Roberts, W. T.

    1985-01-01

    The theoretical and experimental beam plasma physics (TEBPP) consists of a package of five instruments to measure electric and magnetic fields, plasma density and temperature, neutral density, photometric emissions, and energetic particle spectra during firings of the particle injector (SEPAC) electron beam. The package is deployed on a maneuverable boom (or RMS) and is used to measure beam characteristics and induced perturbations in the near field (< 10 m) and mid field (10 m to 100 m) along the electron beam. The TEBPP package will be designed to investigate induced oscillations and induced electromagnetic mode waves, neutral and ion density and temperature effects, and beam characteristics as a function of axial distance.

  7. [Experimental and theoretical high energy physics program

    SciTech Connect

    Finley, J.; Gaidos, J.A.; Loeffler, F.J.; McIlwain, R.L.; Miller, D.H.; Palfrey, T.R.; Shibata, E.I.; Shipsey, I.P.

    1993-04-01

    Experimental and theoretical high-energy physics research at Purdue is summarized in a number of reports. Subjects treated include the following: the CLEO experiment for the study of heavy flavor physics; gas microstrip detectors; particle astrophysics; affine Kac–Moody algebra; nonperturbative mass bounds on scalar and fermion systems due to triviality and vacuum stability constraints; resonance neutrino oscillations; e+e− collisions at CERN; p̄–p collisions at FNAL; accelerator physics at Fermilab; development work for the SDC detector at SSC; TOPAZ; D-zero physics; physics beyond the standard model; and the Collider Detector at Fermilab. (RWR)

  8. Alcohol cold starting - A theoretical study

    NASA Technical Reports Server (NTRS)

    Browning, L. H.

    1983-01-01

    Two theoretical computer models have been developed to study cold starting problems with alcohol fuels. The first model, a droplet fall-out and sling-out model, shows that droplets must be smaller than 50 microns to enter the cylinder under cranking conditions without being slung-out in the intake manifold. The second model, which examines the fate of droplets during the compression process, shows that the heat of compression can be used to vaporize small droplets (less than 50 microns) producing flammable mixtures below freezing ambient temperatures. While droplet size has the greater effect on startability, a very high compression ratio can also aid cold starting.

  9. A quantum theoretical study of polyimides

    NASA Technical Reports Server (NTRS)

    Burke, Luke A.

    1987-01-01

    One of the most important contributions of theoretical chemistry is the correct prediction of properties of materials before any costly experimental work begins. This is especially true in the field of electrically conducting polymers. Development of the Valence Effective Hamiltonian (VEH) technique for the calculation of the band structure of polymers was initiated. The necessary VEH potentials were developed for the sulfur and oxygen atoms within the particular molecular environments, and an explanation was explored for the success of this approximate method in predicting the optical properties of conducting polymers.

  10. Theoretical prediction of regression rates in swirl-injection hybrid rocket engines

    NASA Astrophysics Data System (ADS)

    Ozawa, K.; Shimada, T.

    2016-07-01

    The authors theoretically and analytically predict how many times higher the regression rates of swirl-injection hybrid rocket engines are than those of axial-injection engines, by estimating the heat flux from boundary-layer combustion to the fuel port. The engines are assumed to be of the type whose oxidizer is injected from the side opposite the nozzle, such as those proposed by Yuasa et al. To simplify the estimation, some hypotheses, such as three-dimensional (3D) axisymmetric flow, are assumed. The results of this prediction method are largely consistent with Yuasa's experimental data in the range of high swirl numbers.

  11. Field-widened Michelson interferometer for spectral discrimination in high-spectral-resolution lidar: theoretical framework.

    PubMed

    Cheng, Zhongtao; Liu, Dong; Luo, Jing; Yang, Yongying; Zhou, Yudi; Zhang, Yupeng; Duan, Lulin; Su, Lin; Yang, Liming; Shen, Yibing; Wang, Kaiwei; Bai, Jian

    2015-05-04

    A field-widened Michelson interferometer (FWMI) is developed to act as the spectral discriminator in high-spectral-resolution lidar (HSRL). This realization is motivated by the wide-angle Michelson interferometer (WAMI) which has been used broadly in atmospheric wind and temperature detection. This paper describes an independent theoretical framework for the application of the FWMI in HSRL for the first time. In the framework, the operation principles and application requirements of the FWMI are discussed in comparison with those of the WAMI. Theoretical foundations for designing this type of interferometer are introduced based on these comparisons. Moreover, a general performance estimation model for the FWMI is established, which can provide common guidelines for the performance budget and evaluation of the FWMI in both the design and operation stages. Examples incorporating many practical imperfections or conditions that may degrade the performance of the FWMI are given to illustrate the implementation of the modeling. This theoretical framework presents a complete and powerful tool for solving most of the theoretical or engineering problems encountered in the FWMI application, including the design, parameter calibration, prior performance budget, posterior performance estimation, and so on. It will be a valuable contribution to the lidar community to develop a new generation of HSRLs based on the FWMI spectroscopic filter.

  12. Single-shot camera position estimation by crossed grating imaging

    NASA Astrophysics Data System (ADS)

    Juarez-Salazar, Rigoberto; Gaxiola, Leopoldo N.; Diaz-Ramirez, Victor H.

    2017-01-01

    A simple method to estimate the position of a camera device with respect to a reference plane is proposed. The method utilizes a crossed grating in the reference plane and exploits the coordinate transformation induced by the perspective projection. If the focal length is available, the position of the camera can be estimated with a single shot. Otherwise, the focal length can first be estimated from a few frames captured at different known displacements. The theoretical principles of the proposed method are given and the functionality of the approach is exhibited by correcting perspective-distorted images. The proposed method is computationally efficient and highly appropriate for use in dynamic measurement systems.
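    The paper's full perspective-projection treatment is not reproduced here, but the core geometric idea can be sketched for the simplest case of a fronto-parallel grating under the pinhole model; the function name and all numbers are illustrative assumptions.

```python
# Pinhole-camera sketch: estimate the distance to a fronto-parallel
# crossed grating from the spacing of its lines in the image.
# Similar triangles give period_px / focal_px = pitch_m / Z.
# All names and numbers are illustrative assumptions, not the paper's.

def camera_distance(focal_px: float, pitch_m: float, period_px: float) -> float:
    """Distance Z to the grating plane from the observed grating period."""
    return focal_px * pitch_m / period_px

# Example: 1000 px focal length, 5 mm grating pitch imaged as a 10 px period.
z = camera_distance(focal_px=1000.0, pitch_m=0.005, period_px=10.0)
print(f"estimated distance: {z:.2f} m")  # 1000 * 0.005 / 10 = 0.50 m
```

For an oblique camera, the grating period varies across the image, and it is that variation which encodes the remaining pose parameters.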

  13. Methods for Estimation of Market Power in Electric Power Industry

    NASA Astrophysics Data System (ADS)

    Turcik, M.; Oleinikova, I.; Junghans, G.; Kolcun, M.

    2012-01-01

    The article addresses the topical issue of the newly arisen phenomenon of market power in the electric power industry. The authors point out the importance of effective instruments and methods for credible estimation of market power on a liberalized electricity market, as well as the forms and consequences of market power abuse. The fundamental principles and methods of market power estimation are given along with the most common relevant indicators. Furthermore, a proposal for determining the relevant market, taking into account the specific features of the power system, and a theoretical example of estimating the residual supply index (RSI) in the electricity market are given.
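    The residual supply index mentioned above has a simple standard form: the market's capacity excluding supplier i, divided by demand. A minimal sketch (the market figures below are invented for illustration):

```python
def residual_supply_index(total_capacity: float, supplier_capacity: float,
                          demand: float) -> float:
    """RSI_i = (total capacity - supplier i's capacity) / demand.
    RSI < 1 means supplier i is pivotal: demand cannot be met without it."""
    return (total_capacity - supplier_capacity) / demand

# Illustrative numbers only: a 10 GW market serving 8 GW of demand,
# with one supplier controlling 3 GW.
rsi = residual_supply_index(total_capacity=10_000, supplier_capacity=3_000,
                            demand=8_000)
print(f"RSI = {rsi:.3f}")  # (10000 - 3000) / 8000 = 0.875 -> pivotal supplier
```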

  14. Theoretical, Experimental, and Computational Evaluation of a Tunnel Ladder Slow-Wave Structure

    NASA Technical Reports Server (NTRS)

    Wallett, Thomas M.; Qureshi, A. Haq

    1994-01-01

    The dispersion characteristics of a tunnel ladder circuit in a ridged waveguide were experimentally measured and determined by computer simulation using the electromagnetic code MAFIA. To qualitatively estimate interaction impedances, resonance frequency shifts due to a perturbing dielectric rod along the axis were also measured, indicating the axial electric field strength. Theoretical modeling of the electric and magnetic fields in the tunnel area was also performed.

  15. Theoretical Studies of TE-Wave Propagation as a Diagnostic for Electron Cloud

    SciTech Connect

    Penn, Gregory E; Vay, Jean-Luc

    2010-05-17

    The propagation of TE waves is sensitive to the presence of an electron cloud primarily through phase shifts generated by the altered dielectric function, but can also lead to polarization changes and other effects, especially in the presence of magnetic fields. These effects are studied theoretically and also through simulations using WARP. Examples are shown related to CesrTA parameters, and used to observe different regimes of operation as well as to validate estimates of the phase shift.
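    The dominant effect described above, a phase shift proportional to the cloud density, can be estimated with the standard free-space small-density approximation Δφ ≈ ωp²L/(2cω). Waveguide dispersion and magnetic-field effects, which the WARP simulations include, are neglected here, and all numbers are illustrative assumptions, not CesrTA parameters.

```python
import math

# Phase shift of a TE wave crossing an electron cloud, in the small-density
# free-space approximation: dphi ≈ wp^2 * L / (2 * c * w).
# Waveguide cutoff effects are neglected; all numbers are assumptions.

E_CHARGE = 1.602176634e-19   # elementary charge, C
E_MASS = 9.1093837015e-31    # electron mass, kg
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
C = 2.99792458e8             # speed of light, m/s

def te_phase_shift(n_e: float, freq_hz: float, length_m: float) -> float:
    wp2 = n_e * E_CHARGE**2 / (EPS0 * E_MASS)   # plasma frequency squared
    w = 2 * math.pi * freq_hz
    return wp2 * length_m / (2 * C * w)          # radians

# Illustrative: 1e12 m^-3 cloud density, 2 GHz carrier, 10 m path length.
dphi = te_phase_shift(n_e=1e12, freq_hz=2e9, length_m=10.0)
print(f"phase shift ≈ {dphi * 1e3:.2f} mrad")
```

Milliradian-level shifts of this kind are why the technique is sensitive to even dilute clouds.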

  16. Opacity Measurement and Theoretical Investigation of Hot Silicon Plasma

    NASA Astrophysics Data System (ADS)

    Xiong, Gang; Yang, Jiamin; Zhang, Jiyan; Hu, Zhimin; Zhao, Yang; Qing, Bo; Yang, Guohong; Wei, Minxi; Yi, Rongqing; Song, Tianming; Li, Hang; Yuan, Zheng; Lv, Min; Meng, Xujun; Xu, Yan; Wu, Zeqing; Yan, Jun

    2016-01-01

    We report on opacity measurements of a silicon (Si) plasma at a temperature of (72 ± 5) eV and a density of (6.0 ± 1.2) mg cm-3 in the photon energy range of 1790-1880 eV. A 23 μg cm-2 Si foil tamped by 50 μg cm-2 CH layers on each side was heated to a hot-dense plasma state by X-ray radiation emitted from a D-shaped gold cavity that was irradiated by intense lasers. Absorption lines of 1s - 2p transitions of Si xiii to Si ix ions have been measured using point-projection spectroscopy. The transmission spectrum of the silicon plasma was determined by comparing the light passing through the plasma to the light from the same shot passing by the plasma. The density of the Si plasma was determined experimentally by side-on radiography and the temperature was estimated from the radiation flux data. Radiative hydrodynamic simulations were performed to obtain the temporal evolutions of the density and temperature of the Si plasma. The experimentally obtained transmission spectra of the Si sample plasma have been reproduced using a detailed term account model with the local thermodynamic equilibrium approximation. The energy levels, oscillator strengths and photoionization cross-sections used in the calculation were generated by the flexible atomic code. The experimental transmission spectrum was compared with the theoretical calculation and good agreement was found. The present experimental spectrum and theoretical calculation were also compared with the new opacities available in the Los Alamos OPLIB database.

  17. Development of Warp Yarn Tension During Shedding: A Theoretical Approach

    NASA Astrophysics Data System (ADS)

    Ghosh, Subrata; Chary, Prabhakara; Roy, Sukumar

    2015-10-01

    A theoretical investigation of the development of warp yarn tension during weaving with tappet shedding is carried out, based on the dynamic nature of the shed geometry. The path of the warp yarn on a weaving machine is divided into four zones. The tension developed in each zone is estimated for every minute rotation of the bottom shaft. A model has been developed based on the dynamic nature of the shed geometry and the possible yarn flow from one zone to another. A computer program based on this model of the shedding process is developed for predicting the warp yarn tension variation during shedding. The output of the model and the experimental values of yarn tension developed in zone-D, i.e. between the back rest and the back lease rod, are compared, and they show good agreement. The warp yarn tension values predicted by the model in zone-D are 10-13% lower than the experimentally measured values. Analysis of the theoretical peak values of the developed yarn tension in the four zones (zone-A, zone-B, zone-C and zone-D) shows that the peak tension in zones A, B and C is much higher than the peak tension near the back rest (zone-D), being about twice or more the latter. The study also reveals that the peak values of the developed yarn tension differ for the two extreme positions of a heald. The impact of the coefficient of friction on the peak yarn tension is nominal.

  18. Inertial electrostatic confinement: Theoretical and experimental studies of spherical devices

    NASA Astrophysics Data System (ADS)

    Meyer, Ryan

    Inertial Electrostatic Confinement (IEC) is a means to confine ions for fusion purposes with electrostatic fields in a converging geometry. Its engineering simplicity makes it appealing when compared to magnetic confinement devices. It is hoped that such a device may one day be a net energy producer, but it has near-term applications as a neutron generator. We study spherical IECs (SIECs), both theoretically and experimentally. Theoretically, we compute solutions in the free molecular limit and map out regions in control parameter space conducive to the formation of double potential wells. In addition, several other observables are mapped in the control parameter space. Such studies predict the threshold for the phenomenon of "core splitting" to occur when the fractional well depth (FWD) is ˜70%-80%. With respect to double potential wells, it is shown that an optimal population of electrons exists for double well formation. In addition, double well depth is relatively insensitive to space charge spreading of ion beams. Glow discharge devices are studied experimentally with double and single Langmuir probes. The postulated micro-channeling phenomenon is verified with density measurements along a micro-channel and along a radius where micro-channels are absent. In addition, the measurements allow an evaluation of the neutrality of micro-channels and the heterogeneous structure of "Star Mode". It is shown that, despite visual evidence, micro-channeling persists well into "Jet" mode. In addition, the threshold for the "Star" mode to "Jet" mode transition is obtained experimentally. The studies have revealed new techniques for estimating tangential electric field components and studying the focusing of ion flow.

  20. Estimating ground motions using recorded accelerograms

    SciTech Connect

    Heaton, T.H.; Tajima, Fumiko; Mori, A.W.

    1986-03-01

    A procedure for estimating ground motions using recorded accelerograms is described. The premise of the study is the assumption that future ground motions will be similar to those observed for similar site and tectonic situations in the past. Direct techniques for scaling existing accelerograms have been developed, based on relative estimates of local magnitude, M_L. Design events are described deterministically in terms of fault dimension, tectonic setting (stress drop), fault distance, and site conditions. A combination of empirical and theoretical arguments is used to develop relationships between M_L and other earthquake magnitude scales. In order to minimize scaling errors due to lack of understanding of the physics of strong ground motion, the procedure employs as few intermediate scaling laws as possible. The procedure conserves a meaningful measure of the uncertainty inherent when predicting ground motions from simple parameterizations of earthquake sources and site conditions.