Hey, Hwee Weng Dennis; Lau, Eugene Tze-Chun; Lim, Joel-Louis; Choong, Denise Ai-Wen; Tan, Chuen-Seng; Liu, Gabriel Ka-Po; Wong, Hee-Kit
2017-03-01
Flexion radiographs have been used to identify cases of spinal instability. However, current methods are not standardized and are neither sufficiently sensitive nor specific to identify instability. This study aimed to introduce a new slump sitting method for performing lumbar spine flexion radiographs and to compare the angular ranges of motion (ROMs) and displacements between the conventional method and this new method. This was a prospective study on radiological evaluation of lumbar spine flexion ROMs and displacements using dynamic radiographs. Sixty patients were recruited from a single tertiary spine center. Angular and displacement measurements of lumbar spine flexion were carried out. Participants were randomly allocated into two groups: those who did the new method first, followed by the conventional method, versus those who did the conventional method first, followed by the new method. A comparison of the angular and displacement measurements of lumbar spine flexion between the conventional method and the new method was performed and tested for superiority and non-inferiority. Measurements of global lumbar angular ROM were, on average, 17.3° larger (p<.0001) using the new slump sitting method compared with the conventional method. Differences were most significant at the levels of L3-L4, L4-L5, and L5-S1 (p<.0001, p<.0001, and p=.001, respectively). There was no significant difference between the methods when measuring lumbar displacements (p=.814). The new slump sitting dynamic radiograph method was shown to be superior to the conventional method in measuring angular ROM and non-inferior to the conventional method in the measurement of displacement. Copyright © 2016 Elsevier Inc. All rights reserved.
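The superiority/non-inferiority logic in the abstract above can be sketched numerically. The following Python sketch uses hypothetical paired ROM differences (not the study's data) and a normal approximation in place of the exact t quantile; the 5° non-inferiority margin is also an illustrative assumption:

```python
import math
from statistics import mean, stdev

def paired_t(diffs):
    """One-sample t statistic on paired differences (H0: mean difference = 0)."""
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

def non_inferior(diffs, margin):
    """Crude non-inferiority check: the lower bound of an approximate 95% CI
    for the mean difference (new minus conventional) must exceed -margin.
    Uses 1.96 (normal approximation) rather than the exact t quantile."""
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)
    return mean(diffs) - 1.96 * se > -margin

# hypothetical paired angular ROM differences (degrees), new minus conventional
diffs = [15.2, 18.4, 16.9, 19.1, 17.0, 16.2, 18.8, 15.7]
print(paired_t(diffs) > 0)        # large positive t -> superiority direction
print(non_inferior(diffs, 5.0))   # lower CI bound well above -5 degrees
```

A mean difference whose entire confidence interval lies above zero supports superiority; non-inferiority only requires the interval to stay above the pre-specified margin.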
AlBarakati, SF; Kula, KS; Ghoneima, AA
2012-01-01
Objective: The aim of this study was to assess the reliability and reproducibility of angular and linear measurements of conventional and digital cephalometric methods. Methods: A total of 13 landmarks and 16 skeletal and dental parameters were defined and measured on pre-treatment cephalometric radiographs of 30 patients. The conventional and digital tracings and measurements were performed twice by the same examiner with a 6-week interval between measurements. The reliability within each method was determined using Pearson's correlation coefficient (r²). The reproducibility between methods was assessed by paired t-test. The level of statistical significance was set at p < 0.05. Results: All measurements for each method had r² above 0.90 (strong correlation) except maxillary length, which had a correlation of 0.82 for conventional tracing. Significant differences between the two methods were observed in most angular and linear measurements except for the ANB angle (p = 0.5), angle of convexity (p = 0.09), anterior cranial base (p = 0.3), and lower anterior facial height (p = 0.6). Conclusion: In general, both conventional and digital cephalometric analysis methods are highly reliable. Although the reproducibility of the two methods showed some statistically significant differences, most differences were not clinically significant. PMID:22184624
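The within-method reliability figure used above (Pearson correlation between two tracing sessions) is straightforward to compute. A minimal stdlib-only sketch with hypothetical repeated angle measurements, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two measurement sessions."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical repeated ANB angle tracings (degrees) by one examiner
first  = [2.1, 3.4, 4.0, 1.8, 2.9, 3.7]
second = [2.2, 3.3, 4.1, 1.9, 2.8, 3.8]
print(pearson_r(first, second) ** 2 > 0.9)   # squared correlation above 0.90
```

High correlation between sessions indicates the examiner reproduces the same values; it says nothing about agreement *between* the two tracing methods, which is why the abstract pairs it with a paired t-test.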
Bergin, Junping Ma; Rubenstein, Jeffrey E; Mancl, Lloyd; Brudvik, James S; Raigrodski, Ariel J
2013-10-01
Conventional impression techniques for recording the location and orientation of implant-supported, complete-arch prostheses are time consuming and prone to error. The direct optical recording of the location and orientation of implants, without the need for intermediate transfer steps, could reduce or eliminate those disadvantages. The objective of this study was to assess the feasibility of using a photogrammetric technique to record the location and orientation of multiple implants and to compare the results with those of a conventional complete-arch impression technique. A stone cast of an edentulous mandibular arch containing 5 implant analogs was fabricated to create a master model. The 3-dimensional (3D) spatial orientations of implant analogs on the master model were measured with a coordinate measuring machine (CMM) (control). Five definitive casts were made from the master model with a splinted impression technique. The positions of the implant analogs on the 5 casts were measured with a NobelProcera scanner (conventional method). Prototype optical targets were attached to the master model implant analogs, and 5 sets of images were recorded with a digital camera and a standardized image capture protocol. Dimensional data were imported into commercially available photogrammetry software (photogrammetric method). The precision and accuracy of the 2 methods were compared with a 2-sample t test (α=.05) and a 95% confidence interval. The location precision (standard error of measurement) for CMM was 3.9 µm (95% CI 2.7 to 7.1), for photogrammetry, 5.6 µm (95% CI 3.4 to 16.1), and for the conventional method, 17.2 µm (95% CI 10.3 to 49.4). The average measurement error was 26.2 µm (95% CI 15.9 to 36.6) for the conventional method and 28.8 µm (95% CI 24.8 to 32.9) for the photogrammetric method. 
The overall measurement accuracy was not significantly different between the conventional and photogrammetric methods (mean difference = -2.6 µm, 95% CI -12.8 to 7.6). The precision of the photogrammetric method was similar to that of the CMM, whereas the precision of the conventional method was lower than that of both the CMM and the photogrammetric method. However, the overall measurement accuracy of the photogrammetric and conventional methods was similar. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
The Sine Method: An Alternative Height Measurement Technique
Don C. Bragg; Lee E. Frelich; Robert T. Leverett; Will Blozan; Dale J. Luthringer
2011-01-01
Height is one of the most important dimensions of trees, but few observers are fully aware of the consequences of the misapplication of conventional height measurement techniques. A new approach, the sine method, can improve height measurement by being less sensitive to the requirements of conventional techniques (similar triangles and the tangent method). We studied...
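The contrast between the two techniques can be made concrete. The tangent method needs a taped horizontal distance and implicitly assumes the treetop sits vertically above the base; the sine method works directly from laser slope distances, so a leaning or offset top does not inflate the estimate. A sketch with illustrative numbers (not the study's data):

```python
import math

def tangent_height(horizontal_dist, angle_top_deg, angle_base_deg):
    """Conventional tangent method: h = D*(tan(top) - tan(base)).
    Assumes the measured top is vertically above the measured base."""
    return horizontal_dist * (math.tan(math.radians(angle_top_deg)) -
                              math.tan(math.radians(angle_base_deg)))

def sine_height(slope_dist_top, angle_top_deg, slope_dist_base, angle_base_deg):
    """Sine method: h = L_top*sin(top) - L_base*sin(base),
    using direct laser slope distances to top and base."""
    return (slope_dist_top * math.sin(math.radians(angle_top_deg)) -
            slope_dist_base * math.sin(math.radians(angle_base_deg)))

# perfectly vertical tree 20 m away: both methods agree
h_tan = tangent_height(20.0, 45.0, -5.0)
h_sin = sine_height(20.0 / math.cos(math.radians(45.0)), 45.0,
                    20.0 / math.cos(math.radians(-5.0)), -5.0)
```

For a vertical tree the two agree; when the crown point actually sighted is displaced horizontally toward the observer, the tangent method overstates the height while the sine method does not, which is the error mode the authors highlight.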
Yatake, Hidetoshi; Sawai, Yuka; Nishi, Toshio; Nakano, Yoshiaki; Nishimae, Ayaka; Katsuda, Toshizo; Yabunaka, Koichi; Takeda, Yoshihiro; Inaji, Hideo
2017-07-01
The objective of the study was to compare direct measurement with a conventional method for evaluating clip placement in stereotactic vacuum-assisted breast biopsy (ST-VAB) and to evaluate the accuracy of clip placement using the direct method. Accuracy of clip placement was assessed by measuring the distance from a residual calcification of the targeted calcification cluster to the clip on a mammogram after ST-VAB. Distances in the craniocaudal (CC) and mediolateral oblique (MLO) views were measured in 28 subjects with mammograms recorded twice or more after ST-VAB. The difference in distance between the first and second measurements was defined as the reproducibility and was compared with that of a conventional method using a mask system with overlap of transparent film on the mammogram. The 3D clip-to-calcification distance was measured using the direct method in 71 subjects. The reproducibility of the direct method was higher than that of the conventional method in both CC and MLO views (P = 0.002, P < 0.001). The median 3D clip-to-calcification distance was 2.8 mm, with an interquartile range of 2.0-4.8 mm and a range of 1.1-36.3 mm. The direct method used in this study was more accurate than the conventional method, and gave a median 3D distance of 2.8 mm between the calcification and clip.
Solar energy microclimate as determined from satellite observations
NASA Technical Reports Server (NTRS)
Vonder Haar, T. H.; Ellis, J. S.
1975-01-01
A method is presented for determining solar insolation at the earth's surface using satellite broadband visible radiance and cloud imagery data, along with conventional in situ measurements. Conventional measurements are used to both tune satellite measurements and to develop empirical relationships between satellite observations and surface solar insolation. Cloudiness is the primary modulator of sunshine. The satellite measurements as applied in this method consider cloudiness both explicitly and implicitly in determining surface solar insolation at space scales smaller than the conventional pyranometer network.
Shrestha, Rojeet; Miura, Yusuke; Hirano, Ken-Ichi; Chen, Zhen; Okabe, Hiroaki; Chiba, Hitoshi; Hui, Shu-Ping
2018-01-01
Fatty acid (FA) profiling of milk has important applications in human health and nutrition. Conventional methods for the saponification and derivatization of FA are time-consuming and laborious. We aimed to develop a simple, rapid, and economical method for the determination of FA in milk. We applied a beneficial approach of microwave-assisted saponification (MAS) of milk fats and microwave-assisted derivatization (MAD) of FA to its hydrazides, integrated with HPLC-based analysis. The optimal conditions for MAS and MAD were determined. Microwave irradiation significantly reduced the sample preparation time from 80 min with the conventional method to less than 3 min. We used three internal standards for the measurement of short-, medium-, and long-chain FA. The proposed method showed satisfactory analytical sensitivity, recovery, and reproducibility. There was a significant correlation in milk FA concentrations between the proposed and conventional methods. Being quick, economical, and convenient, the proposed method for milk FA measurement can be a substitute for the conventional method.
Hans-Erik Andersen; Stephen E. Reutebuch; Robert J. McGaughey
2006-01-01
Tree height is an important variable in forest inventory programs but is typically time-consuming and costly to measure in the field using conventional techniques. Airborne light detection and ranging (LIDAR) provides individual tree height measurements that are highly correlated with field-derived measurements, but the imprecision of conventional field techniques does...
NASA Astrophysics Data System (ADS)
Baek, Tae Hyun
Photoelasticity is one of the most widely used whole-field optical methods for stress analysis. The technique of birefringent coatings, also called the method of photoelastic coatings, extends the classical procedures of model photoelasticity to the measurement of surface strains in opaque models made of any structural material. The photoelastic phase-shifting method can be used for the determination of the phase values of isochromatics and isoclinics. In this paper, the photoelastic phase-shifting technique and the conventional Babinet-Soleil compensation method were utilized to analyze a specimen with a triangular hole and a circular hole under bending. The photoelastic phase-shifting technique is a whole-field measurement, whereas the conventional compensation method is a point measurement. Three groups of results were obtained: by the phase-shifting method with a reflective polariscope arrangement, by the conventional compensation method, and by FEM simulation. The results from the first two methods agree with each other reasonably well within experimental error. The advantage of the photoelastic phase-shifting method is that it can accurately measure the stress distribution close to the edges of the holes.
NASA Technical Reports Server (NTRS)
Richards, W. Lance
1996-01-01
Significant strain-gage errors may exist in measurements acquired in transient-temperature environments if conventional correction methods are applied. As heating or cooling rates increase, temperature gradients between the strain-gage sensor and substrate surface increase proportionally. These temperature gradients introduce strain-measurement errors that are currently neglected in both conventional strain-correction theory and practice. Therefore, the conventional correction theory has been modified to account for these errors. A new experimental method has been developed to correct strain-gage measurements acquired in environments experiencing significant temperature transients. The new correction technique has been demonstrated through a series of tests in which strain measurements were acquired for temperature-rise rates ranging from 1 to greater than 100 degrees F/sec. Strain-gage data from these tests have been corrected with both the new and conventional methods and then compared with an analysis. Results show that, for temperature-rise rates greater than 10 degrees F/sec, the strain measurements corrected with the conventional technique produced strain errors that deviated from analysis by as much as 45 percent, whereas results corrected with the new technique were in good agreement with analytical results.
Center index method: an alternative for wear measurements with radiostereometry (RSA).
Dahl, Jon; Figved, Wender; Snorrason, Finnur; Nordsletten, Lars; Röhrl, Stephan M
2013-03-01
Radiostereometry (RSA) is considered to be the most precise and accurate method for wear measurements in total hip replacement. Post-operative stereoradiographs have so far been necessary for wear measurement; hence, the use of RSA has been limited to studies planned for RSA measurements. We compared a new RSA method for wear measurements that does not require previous radiographs with conventional RSA. Instead of comparing present stereoradiographs with post-operative ones, we developed a method for calculating the post-operative position of the center of the femoral head on the present examination and using this as the index measurement. We compared this alternative method to conventional RSA in 27 hips in an ongoing RSA study. We found a high degree of agreement between the methods for both mean proximal (1.19 mm vs. 1.14 mm) and mean 3D wear (1.52 mm vs. 1.44 mm) after 10 years. Intraclass correlation coefficients (ICC) were 0.958 and 0.955, respectively (p<0.001 for both ICCs). The results were also within the limits of agreement when plotted subject-by-subject in a Bland-Altman plot. Our alternative method for wear measurements with RSA offers results comparable to conventional RSA measurements. It allows precise wear measurements without previous radiological examinations. Copyright © 2012 Orthopaedic Research Society.
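The Bland-Altman agreement check mentioned above reduces to a bias and a pair of limits. A minimal sketch with hypothetical wear values (not the study's data), using the standard 1.96-standard-deviation limits:

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two measurement methods."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    s = stdev(diffs)
    return bias, (bias - 1.96 * s, bias + 1.96 * s)

# hypothetical 3D wear (mm): center index method vs. conventional RSA
center_index = [1.50, 1.62, 1.40, 1.55, 1.48, 1.60]
conventional = [1.44, 1.58, 1.37, 1.50, 1.46, 1.55]
bias, (lo, hi) = bland_altman(center_index, conventional)
```

Agreement is judged subject-by-subject: if nearly all paired differences fall between `lo` and `hi`, and that interval is clinically narrow, the two methods can be used interchangeably.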
Strain gage measurement errors in the transient heating of structural components
NASA Technical Reports Server (NTRS)
Richards, W. Lance
1993-01-01
Significant strain-gage errors may exist in measurements acquired in transient thermal environments if conventional correction methods are applied. Conventional correction theory was modified and a new experimental method was developed to correct indicated strain data for errors created in radiant heating environments ranging from 0.6 C/sec (1 F/sec) to over 56 C/sec (100 F/sec). In some cases the new and conventional methods differed by as much as 30 percent. Experimental and analytical results were compared to demonstrate the new technique. For heating conditions greater than 6 C/sec (10 F/sec), the indicated strain data corrected with the developed technique compared much better to analysis than the same data corrected with the conventional technique.
Yu, Yang; Zhang, Fan; Gao, Ming-Xin; Li, Hai-Tao; Li, Jing-Xing; Song, Wei; Huang, Xin-Sheng; Gu, Cheng-Xiong
2013-01-01
OBJECTIVES Intraoperative transit time flow measurement (TTFM) is widely used to assess anastomotic quality in coronary artery bypass grafting (CABG). However, in sequential vein grafting, the flow characteristics collected by the conventional TTFM method are usually associated with total graft flow and might not accurately indicate the quality of every distal anastomosis in a sequential graft. The purpose of our study was to examine a new TTFM method that could assess the quality of each distal anastomosis in a sequential graft more reliably than the conventional TTFM approach. METHODS Two TTFM methods were tested in 84 patients who underwent sequential saphenous off-pump CABG in Beijing An Zhen Hospital between April and August 2012. In the conventional TTFM method, normal blood flow in the sequential graft was maintained during the measurement, and the flow probe was placed a few centimetres above the anastomosis to be evaluated. In the new method, blood flow in the sequential graft was temporarily reduced during the measurement by placing an atraumatic bulldog clamp on the graft a few centimetres distal to the anastomosis to be evaluated, while the position of the flow probe remained the same as in the conventional method. This new TTFM method was named flow reduction TTFM. Graft flow parameters measured by both methods were compared. RESULTS Compared with the conventional TTFM, the flow reduction TTFM resulted in significantly lower mean graft blood flow (P < 0.05) and, in contrast, a significantly higher pulsatility index (P < 0.05). Diastolic filling was not significantly different between the two methods and was >50% in both cases. Interestingly, the flow reduction TTFM identified two defective middle distal anastomoses that the conventional TTFM failed to detect. Graft flows near the defective distal anastomoses improved substantially after revision.
CONCLUSIONS In this study, we found that temporary reduction of graft flow during TTFM seemed to enhance the sensitivity of TTFM to less-than-critical anastomotic defects in a sequential graft and to improve the overall accuracy of the intraoperative assessment of anastomotic quality in sequential vein grafting. PMID:24000314
Zheng, Dandan; Todor, Dorin A
2011-01-01
In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, the accurate identification of needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements with simple and practical implementation to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor, to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection), based on the gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In water phantom, our method showed an average tip-detection accuracy of 0.7 mm compared with 1.6 mm of the conventional method. In gel phantom (more realistic and tissue-like), our method maintained its level of accuracy while the uncertainty of the conventional method was 3.4mm on average with maximum values of over 10mm because of imaging artifacts. A novel method based on simple physical measurements was developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy. The method demonstrated much improved accuracy and robustness over the conventional method. Copyright © 2011 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
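The geometry behind the physical-measurement approach above is simple: the inserted length is the known needle length minus the residual length left outside, and a pre-established offset maps that depth into TRUS image coordinates. A sketch under assumed, hypothetical quantities (all names and numbers are illustrative, not the paper's calibration values):

```python
def needle_tip_depth(total_len_mm, residual_len_mm, entry_plane_mm):
    """Tip depth along the needle axis: total needle length minus the
    residual length measured outside the template, offset by the
    entry-plane position (illustrative model of the idea)."""
    return entry_plane_mm + (total_len_mm - residual_len_mm)

def tip_on_trus_image(depth_mm, transform_offset_mm):
    """Map physical depth to the TRUS image axis using a one-time,
    pre-measured coordinate transformation offset (assumed constant
    across patients, as in the paper's setup)."""
    return depth_mm - transform_offset_mm

depth = needle_tip_depth(200.0, 62.0, 0.0)   # 138 mm inserted
z_img = tip_on_trus_image(depth, 25.0)       # 113 mm on the image axis
```

Because both inputs are direct length measurements, the result is immune to the ultrasound imaging artifacts that corrupt tip detection on the images themselves.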
Kaur, Ravinder; Dhakad, Megh Singh; Goyal, Ritu; Haque, Absarul; Mukhopadhyay, Gauranga
2016-01-01
Candida infection is a major cause of morbidity and mortality in immunocompromised patients; accurate and early identification is a prerequisite for effective management of patients. The purpose of this study was to compare the conventional identification of Candida species with identification by the Vitek-2 system, and antifungal susceptibility testing (AST) by the broth microdilution method with the Vitek-2 AST system. A total of 172 Candida isolates were subjected to identification by conventional methods, the Vitek-2 system, restriction fragment length polymorphism, and random amplified polymorphic DNA analysis. AST was carried out as per the Clinical and Laboratory Standards Institute M27-A3 document and by the Vitek-2 system. Candida albicans (82.51%) was the most common Candida species, followed by Candida tropicalis (6.29%), Candida krusei (4.89%), Candida parapsilosis (3.49%), and Candida glabrata (2.79%). With the Vitek-2 system, of the 172 isolates, 155 Candida isolates were correctly identified, 13 were misidentified, and four were identified with low discrimination. With conventional methods, 171 Candida isolates were correctly identified and only a single isolate of C. albicans was misidentified as C. tropicalis. The average measurement of agreement between the Vitek-2 system and conventional methods was >94%. Most of the isolates were susceptible to fluconazole (88.95%) and amphotericin B (97.67%). The measurement of agreement between the AST methods was >94% for fluconazole and >99% for amphotericin B, which was statistically significant (P < 0.01). The study confirmed the importance and reliability of conventional and molecular methods, and the acceptable agreement suggests the Vitek-2 system is an alternative method for speciation and sensitivity testing of Candida species.
Thurston, Rebecca C; Hernandez, Javier; Del Rio, Jose M; De La Torre, Fernando
2011-07-01
Most midlife women have hot flashes. The conventional criterion (≥2 μmho rise/30 s) for classifying hot flashes physiologically has shown poor performance. We improved this performance in the laboratory with Support Vector Machines (SVMs), a pattern classification method. We aimed to compare conventional to SVM methods to classify hot flashes in the ambulatory setting. Thirty-one women with hot flashes underwent 24 h of ambulatory sternal skin conductance monitoring. Hot flashes were quantified with conventional (≥2 μmho/30 s) and SVM methods. Conventional methods had low sensitivity (sensitivity=.57, specificity=.98, positive predictive value (PPV)=.91, negative predictive value (NPV)=.90, F1=.60), with performance lower with higher body mass index (BMI). SVMs improved this performance (sensitivity=.87, specificity=.97, PPV=.90, NPV=.96, F1=.88) and reduced BMI variation. SVMs can improve ambulatory physiologic hot flash measures. Copyright © 2010 Society for Psychophysiological Research.
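The performance figures quoted above all derive from a single confusion matrix. A minimal sketch of the arithmetic, with hypothetical counts chosen only to illustrate the calculation (not the study's raw data):

```python
def classifier_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    sens = tp / (tp + fn)                 # sensitivity (recall)
    spec = tn / (tn + fp)                 # specificity
    ppv  = tp / (tp + fp)                 # positive predictive value
    npv  = tn / (tn + fn)                 # negative predictive value
    f1   = 2 * ppv * sens / (ppv + sens)  # harmonic mean of PPV and sensitivity
    return sens, spec, ppv, npv, f1

# hypothetical counts: 100 true hot flashes, of which 57 detected
sens, spec, ppv, npv, f1 = classifier_metrics(tp=57, fp=6, tn=300, fn=43)
```

Note how a criterion can have high specificity and PPV yet a poor F1, because F1 penalizes the missed events (low sensitivity) that the conventional ≥2 μmho/30 s rule produces, which is exactly the gap the SVM approach closes.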
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohimer, J.P.
The use of laser-based analytical methods in nuclear-fuel processing plants is considered. The species and locations for accountability, process control, and effluent control measurements in the Coprocessing, Thorex, and reference Purex fuel processing operations are identified and the conventional analytical methods used for these measurements are summarized. The laser analytical methods based upon Raman, absorption, fluorescence, and nonlinear spectroscopy are reviewed and evaluated for their use in fuel processing plants. After a comparison of the capabilities of the laser-based and conventional analytical methods, the promising areas of application of the laser-based methods in fuel processing plants are identified.
Lima, Ana Paula Barbosa; Vitti, Rafael Pino; Amaral, Marina; Neves, Ana Christina Claro; da Silva Concilio, Lais Regiane
2018-04-01
This study evaluated the dimensional stability of a complete-arch prosthesis processed by conventional method in water bath or microwave energy and polymerized by two different curing cycles. Forty maxillary complete-arch prostheses were randomly divided into four groups (n = 10): MW1 - acrylic resin cured by one microwave cycle; MW2 - acrylic resin cured by two microwave cycles: WB1 - conventional acrylic resin polymerized using one curing cycle in a water bath; WB2 - conventional acrylic resin polymerized using two curing cycles in a water bath. For evaluation of dimensional stability, occlusal vertical dimension (OVD) and area of contact points were measured in two different measurement times: before and after the polymerization method. A digital caliper was used for OVD measurement. Occlusal contact registration strips were used between maxillary and mandibular dentures to measure the contact points. The images were measured using the software IpWin32, and the differences before and after the polymerization methods were calculated. The data were statistically analyzed using the one-way ANOVA and Tukey test (α = .05). he results demonstrated significant statistical differences for OVD between different measurement times for all groups. MW1 presented the highest OVD values, while WB2 had the lowest OVD values ( P <.05). No statistical differences were found for area of contact points among the groups ( P =.7150). The conventional acrylic resin polymerized using two curing cycles in a water bath led to less difference in OVD of complete-arch prosthesis.
Continuous Blood Pressure Monitoring in Daily Life
NASA Astrophysics Data System (ADS)
Lopez, Guillaume; Shuzo, Masaki; Ushida, Hiroyuki; Hidaka, Keita; Yanagimoto, Shintaro; Imai, Yasushi; Kosaka, Akio; Delaunay, Jean-Jacques; Yamada, Ichiro
Continuous monitoring of blood pressure in daily life could improve early detection of cardiovascular disorders, as well as promoting healthcare. Conventional ambulatory blood pressure monitoring (ABPM) equipment can measure blood pressure at regular intervals for 24 hours, but is limited by long measuring time, low sampling rate, and constrained measuring posture. In this paper, we demonstrate a new method for continuous real-time measurement of blood pressure during daily activities. Our method is based on blood pressure estimation from pulse wave velocity (PWV) calculation, whose formula we improved to take into account changes in the inner diameter of blood vessels. Blood pressure estimation results using our new method showed greater precision of measured data during exercise, and better accuracy than the conventional PWV method.
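The conventional PWV pipeline the authors improve on can be sketched in two steps: compute PWV as path length over pulse transit time, then map PWV to pressure with a per-subject calibration. The linear model and its coefficients below are illustrative assumptions, not the paper's improved formula (which additionally accounts for vessel inner-diameter changes):

```python
def pulse_wave_velocity(path_length_m, pulse_transit_time_s):
    """PWV = arterial path length / pulse transit time."""
    return path_length_m / pulse_transit_time_s

def estimate_sbp(pwv, a, b):
    """Assumed linear calibration SBP ~ a*PWV + b; a and b would be fitted
    per subject against reference cuff readings."""
    return a * pwv + b

pwv = pulse_wave_velocity(0.60, 0.12)     # 5.0 m/s from a 0.60 m path, 120 ms transit
sbp = estimate_sbp(pwv, a=14.0, b=50.0)   # toy coefficients, toy output
```

The key point is that transit time can be extracted beat-by-beat from ECG and pulse waveforms, giving a sampling rate far above the interval-based cuff measurements of conventional ABPM.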
Simpson, Michael R.; Oltmann, Richard N.
1993-01-01
Discharge measurement of large rivers and estuaries is difficult, time consuming, and sometimes dangerous. Frequently, discharge measurements cannot be made in tide-affected rivers and estuaries using conventional discharge-measurement techniques because of dynamic discharge conditions. The acoustic Doppler discharge-measurement system (ADDMS) was developed by the U.S. Geological Survey using a vessel-mounted acoustic Doppler current profiler coupled with specialized computer software to measure horizontal water velocity at 1-meter vertical intervals in the water column. The system computes discharge from water- and vessel-velocity data supplied by the ADDMS using vector-algebra algorithms included in the discharge-measurement software. With this system, a discharge measurement can be obtained by engaging the computer software and traversing a river or estuary from bank to bank; discharge in parts of the river or estuarine cross section that cannot be measured because of ADDMS depth limitations is estimated by the system. Comparisons of ADDMS-measured discharges with ultrasonic-velocity-meter-measured discharges, along with error-analysis data, have confirmed that discharges provided by the ADDMS are at least as accurate as those produced using conventional methods. In addition, the advantage of a much shorter measurement time (2 minutes using the ADDMS compared with 1 hour or longer using conventional methods) has enabled use of the ADDMS for several applications where conventional discharge methods could not have been used with the required accuracy because of dynamic discharge conditions.
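The vector-algebra idea mentioned above can be sketched: the flow through the strip swept by the boat in each time step is the vertical component of the cross product of water velocity and boat velocity, times depth and time. A simplified single-depth-cell toy version (real ADCP processing integrates many depth bins and edge estimates):

```python
def transect_discharge(samples):
    """Sum incremental discharge over a bank-to-bank transect.
    Each sample: ((u_w, v_w), (u_b, v_b), depth_m, dt_s), where (u_w, v_w)
    is depth-averaged water velocity and (u_b, v_b) is boat velocity.
    The z-component of (water x boat) gives flow through the swept strip."""
    q = 0.0
    for (u_w, v_w), (u_b, v_b), depth, dt in samples:
        cross_z = u_w * v_b - v_w * u_b   # m^2/s^2
        q += cross_z * depth * dt          # m^3/s accumulated per strip
    return q

# toy transect: uniform 1 m/s downstream flow, boat crossing at 2 m/s,
# 5 m depth, 60 one-second samples -> 1 * 5 * (2 * 60) = 600 m^3/s
samples = [((1.0, 0.0), (0.0, 2.0), 5.0, 1.0)] * 60
q_total = transect_discharge(samples)
```

Because the cross product isolates the flow component perpendicular to the boat track, the result is independent of the path the boat actually steers, which is what makes a free bank-to-bank traverse a valid measurement.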
NASA Astrophysics Data System (ADS)
Sugimoto, Masataka; Hasegawa, Hideyuki; Kanai, Hiroshi
2005-08-01
Endothelial dysfunction is considered to be an initial step of arteriosclerosis [R. Ross: N. Engl. J. Med. 340 (2004) 115]. For the assessment of the endothelium function, brachial artery flow-mediated dilation (FMD) caused by increased blood flow has been evaluated with ultrasonic diagnostic equipment. In the case of conventional methods, the change in artery diameter caused by FMD is measured [M. Hashimoto et al.: Circulation 92 (1995) 3431]. Although the arterial wall has a layered structure (intima, media, and adventitia), such a structure is not taken into account in conventional methods because the change in diameter depends on the characteristic of the entire wall. However, smooth muscle present only in the media contributes to FMD, whereas the collagen-rich hard adventitia does not contribute. In this study, we measure the change in elasticity of only the intima-media region including smooth muscle using the phased tracking method [H. Kanai et al.: IEEE Trans. Ultrason. Ferroelectr. Freq. Control 43 (1996) 791]. From the change in elasticity, FMD measured only for the intima-media region by our proposed method was found to be more sensitive than that measured for the entire wall by the conventional method.
Jo, Ayami; Kanazawa, Manabu; Sato, Yusuke; Iwaki, Maiko; Akiba, Norihisa; Minakuchi, Shunsuke
2015-08-01
To compare the effect of conventional complete dentures (CD) fabricated using two different impression methods on patient-reported outcomes in a randomized controlled trial (RCT). A cross-over RCT was performed with edentulous patients who required maxillomandibular CDs. Mandibular CDs were fabricated using two different methods. The conventional method used a custom tray border moulded with impression compound and a silicone impression material. The simplified method used a stock tray and an alginate impression material. Participants were randomly divided into two groups. The C-S group had the conventional method used first, followed by the simplified method. The S-C group was in the reverse order. Adjustment was performed four times. A wash-out period of 1 month was set. The primary outcome was general patient satisfaction, measured using visual analogue scales, and the secondary outcome was oral health-related quality of life, measured using the Japanese version of the Oral Health Impact Profile for edentulous patients (OHIP-EDENT-J) questionnaire. Twenty-four participants completed the trial. With regard to general patient satisfaction, the conventional method was significantly more acceptable than the simplified method. No significant differences were observed between the two methods in the OHIP-EDENT-J scores. This study showed that CDs fabricated with the conventional method were rated significantly higher for general patient satisfaction than those fabricated with a simplified method. CDs fabricated with the conventional method, which included a preliminary impression made using alginate in a stock tray and subsequently a final impression made using silicone in a border-moulded custom tray, resulted in higher general patient satisfaction. UMIN000009875. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Faizah Bawadi, Nor; Anuar, Shamilah; Rahim, Mustaqqim A.; Mansor, A. Faizal
2018-03-01
A seismic method for determining the ultimate pile bearing capacity was proposed and compared with conventional methods. The Spectral Analysis of Surface Waves (SASW) method, a non-destructive seismic technique that does not require drilling and sampling of soils, was used to determine the shear wave velocity (Vs) and damping (D) profile of the soil. The soil strength was found to be directly proportional to Vs, and its value has been successfully applied to obtain shallow bearing capacity empirically. A method is proposed in this study to determine the pile bearing capacity using Vs and D measurements for pile design, and also as an alternative method to verify the bearing capacity obtained from conventional methods of evaluation. The objectives of this study are to determine Vs and D profiles from frequency response data from SASW measurements and to compare pile bearing capacities obtained from the proposed method and conventional methods. All SASW test arrays were conducted near the borehole and the location of conventional pile load tests. In obtaining skin and end bearing pile resistance, the Hardin and Drnevich equation was used with reference strains obtained from the method proposed by Abbiss. Back-analysis results of pile bearing capacities from SASW were found to be 18981 kN and 4947 kN, compared with 18014 kN and 4633 kN from IPLT, differences of 5% and 6% for the Damansara and Kuala Lumpur test sites, respectively. The results of this study indicate that the seismic method proposed here has the potential to be used in estimating pile bearing capacity.
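The quoted 5% and 6% differences follow directly from the capacity pairs in the abstract. A one-line check (expressing the difference relative to the SASW back-analysis value, which is the convention that reproduces the stated percentages):

```python
def percent_difference(seismic_kn, conventional_kn):
    """Relative difference between the SASW back-analysis capacity and the
    conventional pile load test value, as a percentage of the seismic value."""
    return 100.0 * abs(seismic_kn - conventional_kn) / seismic_kn

damansara    = percent_difference(18981, 18014)   # ~5%
kuala_lumpur = percent_difference(4947, 4633)     # ~6%
```

This kind of back-check is worth doing when reading capacity comparisons, since the choice of denominator (seismic vs. conventional value) shifts the quoted percentage by up to a point.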
Discharge measurements using a broad-band acoustic Doppler current profiler
Simpson, Michael R.
2002-01-01
The measurement of unsteady or tidally affected flow has been a problem faced by hydrologists for many years. Dynamic discharge conditions impose an unreasonably short time constraint on conventional current-meter discharge-measurement methods, which typically last a minimum of 1 hour. Tidally affected discharge can change more than 100 percent during a 10-minute period. Over the years, the U.S. Geological Survey (USGS) has developed moving-boat discharge-measurement techniques that are much faster but less accurate than conventional methods. For a bibliography of conventional moving-boat publications, see Simpson and Oltmann (1993, page 17). The advent of the acoustic Doppler current profiler (ADCP) made possible the development of a discharge-measurement system capable of more accurately measuring unsteady or tidally affected flow. In most cases, an ADCP discharge-measurement system is dramatically faster than conventional discharge-measurement systems, and has comparable or better accuracy. In many cases, an ADCP discharge-measurement system is the only choice for use at a particular measurement site. ADCP systems are not yet "turnkey"; they are still under development, and for proper operation, require a significant amount of operator training. Not only must the operator have a rudimentary knowledge of acoustic physics, but also a working knowledge of ADCP operation, the manufacturer's discharge-measurement software, and boating techniques and safety.
Noncontact Measurement of Humidity and Temperature Using Airborne Ultrasound
NASA Astrophysics Data System (ADS)
Kon, Akihiko; Mizutani, Koichi; Wakatsuki, Naoto
2010-04-01
We describe a noncontact method for measuring humidity and dry-bulb temperature. Conventional humidity sensors are single-point measurement devices, so a noncontact method for measuring relative humidity is desirable. Ultrasonic temperature sensors are noncontact measurement sensors. Because water vapor in the air increases sound velocity, conventional ultrasonic temperature sensors measure the virtual temperature, which is higher than the dry-bulb temperature. We performed experiments using an ultrasonic delay line, an atmospheric pressure sensor, and either a thermometer or a relative humidity sensor to confirm the validity of our measurement method at relative humidities of 30, 50, 75, and 100% and at temperatures of 283.15, 293.15, 308.15, and 323.15 K. The results show that the proposed method measures relative humidity with an error rate of less than 16.4% and dry-bulb temperature with an error of less than 0.7 K. Adaptations of the measurement method for use in air-conditioning control systems are discussed.
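The virtual-temperature effect can be sketched with the standard dry-air sound-speed relation c = c0·sqrt(T/T0) and the common meteorological correction Tv = T·(1 + 0.608·q). These constants and relations are textbook approximations, not necessarily the ones the authors used:

```python
C0 = 331.3   # approximate speed of sound in dry air at 273.15 K (m/s)
T0 = 273.15  # reference temperature (K)

def virtual_temperature_from_sound_speed(c_m_s):
    """Temperature a dry-air model infers from a measured sound speed:
    inverting c = C0 * sqrt(T / T0)."""
    return T0 * (c_m_s / C0) ** 2

def dry_bulb_temperature(t_virtual_k, specific_humidity):
    """Correct the humidity-inflated virtual temperature to dry-bulb
    temperature via Tv = T * (1 + 0.608 * q), with q in kg/kg."""
    return t_virtual_k / (1.0 + 0.608 * specific_humidity)
```

Because water vapor raises the sound speed, the inferred virtual temperature exceeds the dry-bulb temperature in humid air, which is exactly the bias the paper's method corrects for.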
Code of Federal Regulations, 2013 CFR
2013-01-01
..., set the clock time to 3:23 and use the average power approach described in Section 5, Paragraph 5.3.2... conventional ranges, conventional cooking tops, conventional ovens, and microwave ovens at this time. However... finite period of time after the end of the heating function, where the end of the heating function is...
Reproducibility of techniques using Archimedes' principle in measuring cancellous bone volume.
Zou, L; Bloebaum, R D; Bachus, K N
1997-01-01
Researchers have been interested in developing techniques to accurately and reproducibly measure the volume fraction of cancellous bone. Historically, bone researchers have used Archimedes' principle with water to measure the volume fraction of cancellous bone. Preliminary results in our lab suggested that the calibrated water technique did not provide reproducible results. Because of this difficulty, it was decided to compare the conventional water method with a water-with-surfactant method and a helium method using a micropycnometer. The surfactant and helium methods were attempts to improve fluid penetration into the small voids present in the cancellous bone structure. To compare the reproducibility of the new methods with the conventional water method, 16 cancellous bone specimens were obtained from femoral condyles of human and greyhound dog femora. The volume fraction measurements on each specimen were repeated three times with all three techniques. The results showed that the helium displacement method was more than an order of magnitude more reproducible than the two water methods (p < 0.05). Statistical analysis also showed that the conventional water method produced the lowest reproducibility (p < 0.05). The data from this study indicate that the helium displacement technique is a useful, rapid and reproducible tool for quantitatively characterizing anisotropic porous tissue structures such as cancellous bone.
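Archimedes' principle as used here reduces to simple arithmetic: the buoyant mass loss divided by the density of the displacing fluid gives the specimen's material volume, from which a volume fraction follows. A minimal sketch with illustrative numbers (not data from the study):

```python
def displaced_volume_cm3(mass_in_air_g, apparent_mass_submerged_g,
                         fluid_density_g_cm3=0.9982):
    """Archimedes' principle: displaced volume = buoyant mass loss / fluid density.
    Default density is water near room temperature (g/cm^3)."""
    return (mass_in_air_g - apparent_mass_submerged_g) / fluid_density_g_cm3

def bone_volume_fraction(material_volume_cm3, bulk_volume_cm3):
    """Fraction of the specimen envelope actually occupied by bone tissue."""
    return material_volume_cm3 / bulk_volume_cm3
```

The reproducibility problem the authors describe comes from the first step: if the fluid does not fully penetrate the pores, the apparent submerged mass is too low and the inferred material volume too high, which is why helium (which penetrates far smaller voids) proved more reproducible.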
Houshmand, Behzad; Janbakhsh, Noushin; Khalilian, Fatemeh; Talebi Ardakani, Mohammad Reza
2017-01-01
Introduction: Diode laser irradiation has recently shown promising results for treatment of gingival pigmentation. This study sought to compare the efficacy of 2 diode laser irradiation protocols for treatment of gingival pigmentations, namely the conventional method and the sieve method. Methods: In this split-mouth clinical trial, 15 patients with gingival pigmentation were selected and their pigmentation intensity was determined using Dummett's oral pigmentation index (DOPI) in different dental regions. A diode laser (980 nm wavelength, 2 W power) was irradiated in a stipple pattern (sieve method) on one side of the mouth and conventionally on the other side. Level of pain and satisfaction with the outcome (both patient and periodontist) were measured using a 0-10 visual analog scale (VAS) for both methods. Patients were followed up at 2 weeks, one month and 3 months. Pigmentation levels were compared using repeated measures analysis of variance (ANOVA). Differences in pain and satisfaction between the 2 groups were analyzed by t test and a generalized estimating equation model. Results: No significant differences were found in the reduction of pigmentation scores or in pain scores between the 2 groups. The difference in satisfaction with the results across the three time points was significant for both the conventional and sieve methods among patients (P = 0.001) and periodontists (P = 0.015). Conclusion: Diode laser irradiation in both methods successfully eliminated gingival pigmentations. The sieve method was comparable to the conventional technique, offering no additional advantage.
A Rapid Method for Measuring Strontium-90 Activity in Crops in China
NASA Astrophysics Data System (ADS)
Pan, Lingjing Pan; Yu, Guobing; Wen, Deyun; Chen, Zhi; Sheng, Liusi; Liu, Chung-King; Xu, X. George
2017-09-01
A rapid method for measuring Sr-90 activity in crop ashes is presented. Liquid scintillation counting, combined with ion exchange columns packed with 4,4'(5')-di-t-butylcyclohexano-18-crown-6, is used to determine the activity of Sr-90 in crops. The chemical yields of the procedure were quantified using gravimetric analysis. The conventional method, which uses ion-exchange resin with HDEHP, cannot completely remove all the bismuth when comparatively large amounts of lead and bismuth exist in the samples; this is overcome by the rapid method. The chemical yield of the rapid method is about 60% and its MDA for Sr-90 is 2.32 Bq/kg. The whole procedure, together with spectrum analysis to determine the activity, takes only about one day, a large improvement over the conventional method. A modified conventional method is also described to verify the results of the rapid one. The two methods can meet the different needs of routine monitoring and emergency situations.
Agrawal, Yuvraj; Desai, Aravind; Mehta, Jaysheel
2011-12-01
We aimed to quantify the severity of hallux valgus based on the lateral sesamoid position and to establish the correlation of our simple assessment method with the conventional radiological assessments. We reviewed 122 dorso-plantar weight-bearing radiographs of feet. The intermetatarsal and hallux valgus angles were measured by the conventional methods, and the position of the lateral sesamoid in relation to the first metatarsal neck was assessed by our new, simple method. Significant correlations were noted between the intermetatarsal angle and the lateral sesamoid position (Rho 0.74, p < 0.0001) and between the lateral sesamoid position and the hallux valgus angle (Rho 0.56, p < 0.0001). Similar trends were noted across different grades of severity of hallux valgus in all three methods of assessment. Our method of assessing hallux valgus deformity based on the lateral sesamoid position is simple and less time-consuming, and it correlates significantly with the established conventional radiological measurements. Copyright © 2011 European Foot and Ankle Society. Published by Elsevier Ltd. All rights reserved.
Chikushi, Hiroaki; Fujii, Yuka; Toda, Kei
2012-09-21
In this work, a method for measuring polychlorinated biphenyls (PCBs) in contaminated solid waste was investigated. This waste includes paper that is used in electric transformers to insulate electric components. The PCBs in the paper samples were extracted by supercritical fluid extraction and analyzed by gas chromatography with electron capture detection. The recoveries with this method (84-101%) were much higher than those with conventional water extraction (0.08-14%) and comparable to those with conventional organic solvent extraction. The limit of detection was 0.0074 mg kg(-1), and concentrations up to 2.5 mg kg(-1) were measurable for a 0.5 g paper sample. Data for real insulation paper obtained by the proposed method agreed well with those obtained by conventional organic solvent extraction. Extraction from wood and concrete was also investigated, and performance as good as that for paper samples was obtained. Supercritical fluid extraction is simpler, faster, and greener than conventional organic solvent extraction. Copyright © 2012 Elsevier B.V. All rights reserved.
Singh, Sunint; Palaskar, Jayant N.; Mittal, Sanjeev
2013-01-01
Background: Conventional heat-cure poly(methyl methacrylate) (PMMA) is the most commonly used denture base resin despite some shortcomings, one of which is its lengthy polymerization time. Microwave curing was recommended to overcome this, but the limited availability of specially designed microwavable acrylic resin made it unpopular. Therefore, in this study, conventional heat-cure PMMA was polymerized by microwave energy. Aim and Objectives: This study was designed to evaluate surface porosities in PMMA cured by conventional water bath and by microwave energy, and to compare them with a microwavable acrylic resin cured by microwave energy. Materials and Methods: Wax samples were obtained by pouring molten wax into a metal mold of 25 mm × 12 mm × 3 mm dimensions. The samples were divided into three groups: C, conventional heat-cure PMMA cured by the water bath method; CM, conventional heat-cure PMMA cured by microwave energy; and M, specially designed microwavable acrylic denture base resin cured by microwave energy. After polymerization, each sample was scanned in three pre-marked areas for surface porosities using an optical microscope. As per the available literature, this instrument was used for the first time to measure porosity in acrylic resin; it is a reliable method of measuring the area of surface pores. The portion of the sample being scanned is displayed on a computer, and with the help of software the area of each pore was measured and the data were analyzed. Results: Conventional heat-cure PMMA samples cured by microwave energy showed more porosities than the samples cured by the conventional water bath method and the microwavable acrylic resin cured by microwave energy. The higher percentage of porosities was statistically significant, but well within the clinically acceptable range.
Conclusion: Within the limitations of this in-vitro study, conventional heat cure PMMA can be cured by microwave energy without compromising on its property such as surface porosity. PMID:24015000
Fibre Optic Sensors for Selected Wastewater Characteristics
Chong, Su Sin; Abdul Aziz, A. R.; Harun, Sulaiman W.
2013-01-01
Demand for online and real-time measurement techniques to meet environmental regulation and treatment compliance is increasing. However, the conventional techniques, which involve scheduled sampling and chemical analysis, can be expensive and time consuming, so cheaper and faster alternatives for monitoring wastewater characteristics are required. This paper reviews existing conventional techniques and optical and fibre optic sensors for determining selected wastewater characteristics: colour, Chemical Oxygen Demand (COD) and Biological Oxygen Demand (BOD). The review confirms that, with appropriate configuration, calibration and fibre features, these parameters can be determined with accuracy comparable to conventional methods. With more research in this area, the potential for using fibre optic sensors (FOS) for online and real-time measurement of more wastewater parameters across various types of industrial effluent is promising. PMID:23881131
Neck pain assessment in a virtual environment.
Sarig-Bahat, Hilla; Weiss, Patrice L Tamar; Laufer, Yocheved
2010-02-15
Neck-pain and control group comparative analysis of conventional and virtual reality (VR)-based assessment of cervical range of motion (CROM). To use a tracker-based VR system to compare CROM of individuals suffering from chronic neck pain with CROM of asymptomatic individuals; to compare VR system results with those obtained during conventional assessment; to present the diagnostic value of CROM measures obtained by both assessments; and to demonstrate the effect of a single VR session on CROM. Neck pain is a common musculoskeletal complaint with a reported annual prevalence of 30% to 50%. In the absence of a gold standard for CROM assessment, a variety of assessment devices and methodologies exist. Common to these methodologies, assessment of CROM is carried out by instructing subjects to move their head as far as possible. However, these elicited movements do not necessarily replicate functional movements which occur spontaneously in response to multiple stimuli. To achieve a more functional approach to cervical motion assessment, we have recently developed a VR environment in which electromagnetic tracking is used to monitor cervical motion while participants are involved in a simple yet engaging gaming scenario. CROM measures were collected from 25 symptomatic and 42 asymptomatic individuals using VR and conventional assessments. Analysis of variance was used to determine differences between groups and assessment methods. Logistic regression analysis, using a single predictor, compared the diagnostic ability of both methods. Results obtained by both methods demonstrated significant CROM limitations in the symptomatic group. The VR measures showed greater CROM and sensitivity while conventional measures showed greater specificity. A single session exposure to VR resulted in a significant increase in CROM. Neck pain is significantly associated with reduced CROM as demonstrated by both VR and conventional assessment methods. 
The VR method provides assessment of functional CROM and can be used for CROM enhancement. Assessment by VR has greater sensitivity than conventional assessment and can be used for the detection of true symptomatic individuals.
A holistic calibration method with iterative distortion compensation for stereo deflectometry
NASA Astrophysics Data System (ADS)
Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian
2018-07-01
This paper presents a novel holistic calibration method for stereo deflectometry systems to improve measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system because of inaccurate imaging models and distortion elimination. The proposed calibration method compensates system distortion with an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from fringe patterns displayed on the system's LCD screen through reflection off a markerless flat mirror. An iterative algorithm is proposed to compensate system distortion and to optimize the camera imaging parameters and the system's geometrical relation parameters based on a cost function. Both simulation work and experimental results show that the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The peak value (PV) of the measurement error for a flat mirror can be reduced from 282 nm with the conventional calibration approach to 69.7 nm with the proposed method.
Bobo-García, Gloria; Davidov-Pardo, Gabriel; Arroqui, Cristina; Vírseda, Paloma; Marín-Arroyo, María R; Navarro, Montserrat
2015-01-01
Total phenolic content (TPC) and antioxidant activity (AA) assays in microplates save resources and time, and can therefore overcome the fact that the conventional methods are time-consuming, labour intensive and use large amounts of reagents. An intra-laboratory validation of the Folin-Ciocalteu microplate method to measure TPC and of the 2,2-diphenyl-1-picrylhydrazyl (DPPH) microplate method to measure AA was performed, with comparison against the conventional spectrophotometric methods. To compare the TPC methods, the confidence intervals of a linear regression were used; in the range of 10-70 mg L(-1) of gallic acid equivalents (GAE), both methods were equivalent. To compare the AA methods, the F-test and t-test were used in a range from 220 to 320 µmol L(-1) of Trolox equivalents; both methods had homogeneous variances, and the means were not significantly different. The limits of detection and quantification were 0.74 and 2.24 mg L(-1) GAE for the TPC microplate method, and 12.07 and 36.58 µmol L(-1) of Trolox equivalents for the DPPH microplate method. The relative standard deviations of repeatability and reproducibility for both microplate methods were ≤ 6.1%, and accuracy ranged from 88% to 100%. The microplate and conventional methods are equivalent at the 95% confidence level. © 2014 Society of Chemical Industry.
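The variance (F) and mean (t) comparisons used for this kind of method-equivalence check can be sketched in a few lines. This computes only the test statistics; the p-values would come from the F and t distributions (e.g. via scipy.stats, not shown), and the data below are illustrative:

```python
from statistics import mean, variance

def f_statistic(a, b):
    """F-test statistic for homogeneity of variances (larger variance on top)."""
    va, vb = variance(a), variance(b)
    return max(va, vb) / min(va, vb)

def pooled_t_statistic(a, b):
    """Two-sample t statistic assuming equal variances (pooled)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1.0 / na + 1.0 / nb)) ** 0.5
```

An F statistic near 1 (variances homogeneous) justifies the pooled-variance t test that then compares the means, which is the two-step logic the abstract describes.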
Ahn, Tae-Jung; Jung, Yongmin; Oh, Kyunghwan; Kim, Dug Young
2005-12-12
We propose a new chromatic dispersion measurement method for the higher-order modes of an optical fiber using optical frequency-modulated continuous-wave (FMCW) interferometry. An optical fiber supporting a few guided modes was prepared for our experiments. Three different guided modes of the fiber were identified using far-field spatial beam profile measurements and confirmed with numerical mode analysis. Using the principle of conventional FMCW interferometry with a tunable external cavity laser, we have demonstrated that the chromatic dispersion of a few-mode optical fiber can be obtained directly and quantitatively as well as qualitatively. We have also compared our measurement results with those of the conventional modulation phase-shift method.
GESFIDE-PROPELLER Approach for Simultaneous R2 and R2* Measurements in the Abdomen
Jin, Ning; Guo, Yang; Zhang, Zhuoli; Zhang, Longjiang; Lu, Guangming; Larson, Andrew C.
2013-01-01
Purpose: To investigate the feasibility of combining GESFIDE with PROPELLER sampling approaches for simultaneous abdominal R2 and R2* mapping. Materials and Methods: R2 and R2* measurements were performed in 9 healthy volunteers and in phantoms using the GESFIDE-PROPELLER and the conventional Cartesian-sampling GESFIDE approaches. Results: Images acquired with the GESFIDE-PROPELLER sequence effectively mitigated the respiratory motion artifacts that were clearly evident in the images acquired using the conventional GESFIDE approach. There was no significant difference between GESFIDE-PROPELLER and reference MGRE R2* measurements (p = 0.162), whereas the Cartesian-sampling-based GESFIDE methods significantly overestimated R2* values compared to MGRE measurements (p < 0.001). Conclusion: The GESFIDE-PROPELLER sequence provided high-quality images and accurate abdominal R2 and R2* maps while avoiding the motion artifacts common to the conventional Cartesian-sampling GESFIDE approaches. PMID:24041478
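R2* (and analogously R2) maps are typically obtained by fitting a mono-exponential decay S(t) = S0·exp(−R2*·t) to the multi-echo signal at each voxel. A minimal log-linear least-squares sketch of that per-voxel fit follows; it is a generic illustration, not the authors' fitting pipeline:

```python
import math

def fit_r2star(echo_times_ms, signals):
    """Log-linear least-squares fit of S(t) = S0 * exp(-R2* * t).
    Returns R2* in 1/ms (multiply by 1000 for 1/s)."""
    n = len(echo_times_ms)
    ys = [math.log(s) for s in signals]          # linearize the exponential
    x_mean = sum(echo_times_ms) / n
    y_mean = sum(ys) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(echo_times_ms, ys))
    den = sum((x - x_mean) ** 2 for x in echo_times_ms)
    return -num / den                             # slope of ln S vs t is -R2*
```

On a noise-free synthetic decay with R2* = 0.05/ms the fit recovers 0.05 exactly (to floating-point precision); with real data, noise-weighted or nonlinear fits are usually preferred.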
DOT National Transportation Integrated Search
1977-07-01
The workshop focused on current methods of assessing the effectiveness of crime and vandalism reduction methods that are used in conventional urban mass transit systems, and on how they might be applied to new AGT systems. Conventional as well as nov...
Confocal laser induced fluorescence with comparable spatial localization to the conventional method
NASA Astrophysics Data System (ADS)
Thompson, Derek S.; Henriquez, Miguel F.; Scime, Earl E.; Good, Timothy N.
2017-10-01
We present measurements of ion velocity distributions obtained by laser induced fluorescence (LIF) using a single viewport in an argon plasma. A patent-pending design, which we refer to as the confocal fluorescence telescope, combines large objective lenses with a large central obscuration and a spatial filter to achieve high spatial localization along the laser injection direction. Models of the injection and collection optics of the two assemblies are used to provide a theoretical estimate of the spatial localization of the confocal arrangement, which is taken to be the full width at half maximum of the spatial optical response. The new design achieves approximately 1.4 mm localization at a focal length of 148.7 mm, improving on previously published designs by an order of magnitude and approaching the localization achieved by the conventional method. The confocal method, however, does so without requiring a pair of separated, perpendicular optical paths. The confocal technique therefore eases the two-window access requirement of the conventional method, extending the application of LIF to experiments where conventional LIF measurements have been impossible or difficult, or where multiple viewports are scarce.
Code of Federal Regulations, 2012 CFR
2012-01-01
... standby mode, set the clock time to 3:23 and use the average power approach described in Section 5... ranges, conventional cooking tops, conventional ovens, and microwave ovens at this time. However, any... mode may persist for an indefinite time. An indicator that only shows the user that the product is in...
Israel, R G; Evans, P; Pories, W J; O'Brien, K F; Donnelly, J E
1990-01-01
This study compared two methods of hydrostatic weighing without head submersion to conventional hydrostatic weighing in morbidly obese females. We concluded that hydrostatic weighing without head submersion is a valid alternative to conventional hydrostatic weighing, especially when subjects are apprehensive in the water. The use of anthropometric head measures (HWNS-A) did not significantly improve the accuracy of the body composition assessment; therefore, elimination of these time-consuming measurements in favor of a direct correction of body density (Db) for the head remaining above water is recommended.
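Hydrostatic weighing estimates body density from the difference between mass in air and apparent mass underwater, correcting body volume for residual lung volume; percent fat then follows from a two-compartment equation such as Siri's. A minimal sketch with illustrative values (the study's head-above-water correction is not reproduced here):

```python
def body_density(mass_air_kg, mass_water_kg, water_density_kg_l=0.9957,
                 residual_volume_l=1.3, gi_gas_l=0.1):
    """Body density (kg/L) from hydrostatic weighing: buoyant mass loss gives
    gross volume, corrected for residual lung volume and gastrointestinal gas."""
    body_volume_l = ((mass_air_kg - mass_water_kg) / water_density_kg_l
                     - residual_volume_l - gi_gas_l)
    return mass_air_kg / body_volume_l

def siri_percent_fat(density_kg_l):
    """Siri two-compartment equation: %fat = 495 / Db - 450."""
    return 495.0 / density_kg_l - 450.0
```

For example, a 70 kg subject with 3 kg apparent underwater mass (water density 1.0 kg/L, 1.3 L residual volume, 0.1 L gas) gives Db ≈ 1.067 kg/L and roughly 14% fat; all of these inputs are invented for illustration.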
Ichise, Yasuyuki; Horiuchi, Akira; Nakayama, Yoshiko; Tanaka, Naoki
2011-01-01
The ideal method to remove small colorectal polyps is unknown. We compared removal by colon snare transection without electrocautery (cold snare polypectomy) with conventional electrocautery snare polypectomy (hot polypectomy) in terms of procedure duration, difficulty in retrieving polyps, bleeding, and post-polypectomy symptoms. Patients with colorectal polyps up to 8 mm in diameter were randomized to polypectomy by cold snare technique (cold group) or conventional polypectomy (conventional group). The principal outcome measures were abdominal symptoms within 2 weeks after polypectomy. Secondary outcome measures were the rates of retrieval of colorectal polyps and bleeding. Eighty patients were randomized: cold group, n = 40 (101 polyps) and conventional group, n = 40 (104 polyps). The patients' demographic characteristics and the number and size of polyps removed were similar between the two techniques. Procedure time was significantly shorter with cold polypectomy vs. conventional polypectomy (18 vs. 25 min, p < 0.0001). Complete polyp retrieval rates were identical [96% (97/101) vs. 96% (100/104)]. No bleeding requiring hemostasis occurred in either group. Abdominal symptoms shortly after polypectomy were more common with conventional polypectomy (i.e. 20%; 8/40) than with cold polypectomy (i.e. 2.5%; 1/40; p = 0.029). Cold polypectomy was superior to conventional polypectomy in terms of procedure time and post-polypectomy abdominal symptoms. The two methods were otherwise essentially identical in terms of bleeding risk and complete polyp retrieval. Cold polypectomy is therefore the preferred method for removal of small colorectal polyps. Copyright © 2011 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Lee, Kang Il
2015-01-01
A new method for measuring the normalized broadband ultrasound attenuation (nBUA) in trabecular bone using a bidirectional transverse transmission technique was proposed and validated against measurements obtained using the conventional transverse transmission technique. There was no significant difference between the nBUA measurements obtained for 14 bovine femoral trabecular bone samples using the bidirectional and the conventional transverse transmission techniques. The nBUA measured using the two transverse transmission techniques showed strong positive correlations of r = 0.87 to 0.88 with apparent bone density, consistent with the behavior of human trabecular bone in vitro. We expect that the new method can be usefully applied for improved accuracy and precision in clinical measurements.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) (density of mercury equals 13.595 grams per cubic centimeter). 1.9Thermocouple means a device consisting of... Cleaning Operations per Year T—Temperature t—Time V—Volume of Gas Consumed W—Weight of Test Block 2. Test.... 2.9.2Gas Measurements. 2.9.2.1Positive displacement meters. The gas meter to be used for measuring...
NASA Astrophysics Data System (ADS)
Jafari, M.; Cao, S. C.; Jung, J.
2017-12-01
Geological CO2 sequestration (GCS) has recently been introduced as an effective method to mitigate carbon dioxide emissions. CO2 collected from major producer sources is injected into underground formation layers to be stored for thousands to millions of years. A safe and economical storage project depends on insight into trapping mechanisms, fluid dynamics, and fluid-rock interactions. Among the forces governing fluid mobility and distribution under GCS conditions, capillary pressure is important, and wettability (measured by the contact angle, CA) is the most controversial parameter affecting it. To explore the sources of discrepancy in the literature for CA measurement, we conducted a series of conventional captive bubble tests on glass plates under high-pressure conditions. By introducing a shape factor, we concluded that surface imperfections can distort the results of such tests. Since conventional methods of measuring the CA are affected by gravity and scale effects, we introduced a different technique to measure pore-scale CA inside a transparent glass microchip. Our method can account for pore sizes and simulate static and dynamic CA during dewetting and imbibition. Glass plates show water-wet behavior (CA 30°-45°) in a conventional experiment, consistent with the literature; however, miniature bubbles inside the micromodel can show weaker water-wet behavior (CA 55°-69°). In a more realistic pore-scale condition, the water-CO2 interface covers the whole width of a pore throat. Under this condition, the receding CA, which governs injectability and capillary breakthrough pressure, increases with decreasing pore size. On the other hand, the advancing CA, which is important for residual or capillary trapping, does not show a correlation with throat size.
The static CA measured in the pores during dewetting is lower than the static CA on a flat plate, but it is much higher when measured during imbibition, implying weaker water-wet behavior. Pore-scale CA, which more realistically represents rock wettability behavior, shows weaker water-wet behavior than conventional measurement methods indicate, and this must be considered for the safety of geological storage.
NASA Astrophysics Data System (ADS)
Iwaki, Sunao; Ueno, Shoogo
1998-06-01
The weighted minimum-norm estimation (wMNE) is a popular method for obtaining the source distribution in the human brain from magneto- and electroencephalographic measurements when detailed information about the generator profile is not available. We propose a method to reconstruct current distributions in the human brain based on the wMNE technique, with weighting factors defined by a simplified multiple signal classification (MUSIC) prescan. In this method, in addition to the conventional depth normalization technique, the weighting factors of the wMNE were determined from cost values previously calculated by a simplified MUSIC scan that incorporates the temporal information of the measured data. We performed computer simulations of this method and compared it with the conventional wMNE method. The results show that the proposed method is effective for reconstructing current distributions from noisy data.
Suazo, L.; Foerster, B.; Fermin, R.; Speckter, H.; Vilchez, C.; Oviedo, J.; Stoeter, P.
2012-01-01
Summary: The assessment of shunt reduction after embolization of an arteriovenous malformation (AVM) or fistula (AVF) from conventional angiography is often difficult and may be subjective. Here we present a completely non-invasive method using magnetic resonance imaging (MRI) to measure shunt reduction. Using pulsed arterial spin labeling (PASL), we determined the relative amount of signal attributed to the shunt over 1.75 s and 6 different slices covering the lesion. This shunt signal was related to the total signal from all slices and measured before and after embolization. The method showed fair agreement between the PASL results and the judgement from conventional angiography. In cases of total or subtotal shunt occlusion, PASL showed a shunt reduction between 69% and 92%, whereas for minimal shunt reduction as judged by conventional angiography, the PASL result ranged from -6% (indicating slightly increased flow) to 35% in a partially occluded vein of Galen aneurysm. The PASL method proved to be fairly reproducible (up to 2% deviation between three measurements without interventions). In conclusion, PASL is able to reliably measure the amount of shunt reduction achieved by embolization of AVMs and AVFs. PMID:22440600
Fetal head detection and measurement in ultrasound images by an iterative randomized Hough transform
NASA Astrophysics Data System (ADS)
Lu, Wei; Tan, Jinglu; Floyd, Randall C.
2004-05-01
This paper describes an automatic method for measuring the biparietal diameter (BPD) and head circumference (HC) in ultrasound fetal images. A total of 217 ultrasound images were segmented using a K-means classifier, and the skull was detected in 214 of the 217 cases by an iterative randomized Hough transform developed for the detection of incomplete curves in images with strong noise, without user intervention. The automatic measurements were compared with conventional manual measurements by sonographers and a trained panel. The inter-run variations and the differences between the automatic and conventional measurements were small compared with published inter-observer variations. The results showed that the automated measurements were as reliable as the expert measurements and more consistent. This method has great potential in clinical applications.
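The core of a randomized Hough transform for circular structures such as the skull cross-section is: repeatedly sample three edge points, compute their circumcircle, and vote in an accumulator; the dominant bin gives the circle. A minimal circle-only sketch follows (the paper's iterative variant additionally re-detects on residual points and handles incomplete, noisy, elliptical curves):

```python
import math
import random
from collections import Counter

def circumcircle(p1, p2, p3):
    """Center and radius of the circle through three points (None if collinear)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        return None
    s1, s2, s3 = x1 * x1 + y1 * y1, x2 * x2 + y2 * y2, x3 * x3 + y3 * y3
    ux = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    uy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    return ux, uy, math.hypot(x1 - ux, y1 - uy)

def randomized_hough_circle(points, iterations=2000, seed=0):
    """Vote circumcircles of random point triples into integer-binned cells
    and return the (cx, cy, r) bin with the most votes."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(iterations):
        c = circumcircle(*rng.sample(points, 3))
        if c is not None:
            votes[(round(c[0]), round(c[1]), round(c[2]))] += 1
    return votes.most_common(1)[0][0]
```

For noise-free points on a circle centered at (50, 40) with radius 20, every valid triple votes for the same integer bin, so the top bin is (50, 40, 20); with noisy edge maps the voting concentrates probability on the true parameters instead of exhaustively enumerating them as a conventional Hough transform would.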
Kitamura, Kei-Ichiro; Zhu, Xin; Chen, Wenxi; Nemoto, Tetsu
2010-01-01
The conventional zero-heat-flow thermometer, which measures the deep body temperature from the skin surface, is widely used at present. However, this thermometer requires considerable electricity to power the electric heater that compensates for heat loss from the probe; thus, AC power is indispensable for its use. Therefore, this conventional thermometer is inconvenient for unconstrained monitoring. We have developed a new dual-heat-flux method that can measure the deep body temperature from the skin surface without a heater. Our method is convenient for unconstrained and long-term measurement because the instrument is driven by a battery and its design promotes energy conservation. Its probe consists of dual-heat-flow channels with different thermal resistances, and each heat-flow-channel has a pair of IC sensors attached on its top and bottom. The average deep body temperature measurements taken using both the dual-heat-flux and then the zero-heat-flow thermometers from the foreheads of 17 healthy subjects were 37.08 degrees C and 37.02 degrees C, respectively. In addition, the correlation coefficient between the values obtained by the 2 methods was 0.970 (p<0.001). These results show that our method can be used for monitoring the deep body temperature as accurately as the conventional method, and it overcomes the disadvantage of the necessity of AC power supply. (c) 2009 IPEM. Published by Elsevier Ltd. All rights reserved.
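The dual-heat-flux principle reduces to a small steady-state calculation: each channel gives one linear equation linking the unknown deep temperature and the unknown tissue resistance, and two channels with different thermal resistances make the system solvable without a heater. The function below is a minimal sketch under that one-dimensional model; the sensor labels and the ratio k are illustrative, not the probe's actual calibration:

```python
def deep_body_temperature(t1, t2, t3, t4, k):
    """Dual-heat-flux estimate of deep body temperature (degrees C).

    t1, t2: top (skin-side) and bottom sensor of channel 1
    t3, t4: top and bottom sensor of channel 2
    k:      ratio R2/R1 of the two channels' thermal resistances

    Under steady one-dimensional heat flow, each channel satisfies
    T_deep = T_top + R_tissue * q with q = (T_top - T_bottom)/R_channel;
    subtracting the two equations eliminates the unknown R_tissue.
    """
    q1 = t1 - t2        # flux through channel 1, in units of 1/R1
    q2 = (t3 - t4) / k  # flux through channel 2, same units
    return t1 + (t1 - t3) * q1 / (q2 - q1)
```

With consistent synthetic readings (a 37.0 °C core, channel resistances in ratio 2:1), the formula recovers the core temperature exactly.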
Photographic films as remote sensors for measuring albedos of terrestrial surfaces
NASA Technical Reports Server (NTRS)
Pease, S. R.; Pease, R. W.
1972-01-01
To test the feasibility of remotely measuring the albedos of terrestrial surfaces from photographic images, an inquiry was carried out at ground level using several representative common surface targets. Problems of making such measurements with a spectrally selective sensor, such as photographic film, have been compared to previous work utilizing silicon cells. Two photographic approaches have been developed: a multispectral method which utilizes two or three photographic images made through conventional multispectral filters and a single-shot method which utilizes the broad spectral sensitivity of black and white infrared film. Sensitometry related to the methods substitutes a Log Albedo scale for the conventional Log Exposure for creating characteristic curves. Certain constraints caused by illumination geometry are discussed.
Perez, Aurora; Hernández, Rebeca; Velasco, Diego; Voicu, Dan; Mijangos, Carmen
2015-03-01
Microfluidic techniques are expected to provide narrower particle size distribution than conventional methods for the preparation of poly (lactic-co-glycolic acid) (PLGA) microparticles. In addition, it is hypothesized that the particle size distribution of poly (lactic-co-glycolic acid) microparticles influences the settling behavior and rheological properties of its aqueous dispersions. For the preparation of PLGA particles, two different methods, microfluidic and conventional oil-in-water emulsification methods, were employed. The particle size and particle size distribution of PLGA particles prepared by microfluidics were studied as a function of the flow rate of the organic phase, while particles prepared by conventional methods were studied as a function of stirring rate. In order to study the stability and structural organization of colloidal dispersions, settling experiments and oscillatory rheological measurements were carried out on aqueous dispersions of PLGA particles with different particle size distributions. The microfluidic technique allowed control of the size and size distribution of the droplets formed in the process of emulsification. This resulted in a narrower particle size distribution for samples prepared by microfluidics with respect to samples prepared by conventional methods. Polydisperse samples showed a larger tendency to aggregate, thus confirming the advantages of microfluidics over conventional methods, especially if biomedical applications are envisaged. Copyright © 2014 Elsevier Inc. All rights reserved.
Code of Federal Regulations, 2010 CFR
2010-01-01
... pressure of 30 inches of mercury (101.6 kPa) (density of mercury equals 13.595 grams per cubic centimeter...—Volume of Gas Consumed W—Weight of Test Block 2. Test Conditions 2.1Installation. A free standing kitchen.... 2.9.2Gas Measurements. 2.9.2.1Positive displacement meters. The gas meter to be used for measuring...
Héctor García-Gomez; Sheila Izquieta-Rojano; Laura Aguillaume; Ignacio González-Fernández; Fernando Valiño; David Elustondo; Jesús M. Santamaría; Anna Àvila; Mark E. Fenn; Rocío Alonso
2016-01-01
Atmospheric nitrogen deposition is one of the main threats for biodiversity and ecosystem functioning. Measurement techniques like ion-exchange resin collectors (IECs), which are less expensive and time-consuming than conventional methods, are gaining relevance in the study of atmospheric deposition and are recommended to expand monitoring networks. In the present work...
Vaidya, Sharad; Parkash, Hari; Bhargava, Akshay; Gupta, Sharad
2014-01-01
Abundant resources and techniques have been used for complete coverage crown fabrication. Conventional investing and casting procedures for phosphate-bonded investments require a 2- to 4-h procedure before completion. Accelerated casting techniques have been used, but may not result in castings with matching marginal accuracy. The study measured the marginal gap and determined the clinical acceptability of single cast copings invested in a phosphate-bonded investment with the use of conventional and accelerated methods. One hundred and twenty cast coping samples were fabricated using conventional and accelerated methods, with three finish lines: chamfer, shoulder, and shoulder with bevel. Sixty copings were prepared with each technique. Each coping was examined with a stereomicroscope at four predetermined sites, and measurements of marginal gaps were documented for each. A master chart was prepared for all the data, which were analyzed using the Statistical Package for the Social Sciences. Marginal gaps were evaluated with a t-test; analysis of variance and post-hoc analysis were used to compare the two groups and to make comparisons between the three subgroups. The measurements recorded showed no statistically significant difference between conventional and accelerated groups. Among the three marginal designs studied, shoulder with bevel showed the best marginal fit with conventional as well as accelerated casting techniques. Accelerated casting technique could be a vital alternative to the time-consuming conventional casting technique. The marginal fit between the two casting techniques showed no statistical difference.
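The two-group comparison described can be sketched with a two-sample t statistic. Welch's form below does not assume equal variances; the study's exact SPSS test may have used the pooled-variance form, so this is an illustrative sketch:

```python
def welch_t(a, b):
    """Welch's t statistic for two independent samples with possibly
    unequal variances (e.g. marginal gaps measured under conventional
    vs. accelerated casting)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance, group a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance, group b
    return (ma - mb) / ((va / na + vb / nb) ** 0.5)
```

A t statistic near zero, as the reported non-significant differences imply, means the two group means are close relative to their spread.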
Microbial Burden Approach : New Monitoring Approach for Measuring Microbial Burden
NASA Technical Reports Server (NTRS)
Venkateswaran, Kasthuri; Vaishampayan, Parag; Barmatz, Martin
2013-01-01
Advantages of the new approach for differentiating live cells/spores from dead cells/spores. Four examples of Salmonella outbreaks leading to costly destruction of dairy products. List of possible collaboration activities between JPL and other industries (for future discussion). Limitations of traditional microbial monitoring approaches. Introduction to the new approach for rapid measurement of viable (live) bacterial cells/spores and its areas of application. Detailed example for determining live spores using the new approach (a similar procedure applies for determining live cells). JPL has developed a patented approach for measuring the amount of live and dead cells/spores. This novel "molecular" method takes only 5 to 7 hours, compared with the seven days required using conventional techniques. Conventional "molecular" techniques cannot discriminate live cells/spores from dead cells/spores. The JPL-developed method eliminates the false positive results obtained from conventional "molecular" techniques, which lead to unnecessary delays in processing and to unnecessary destruction of food products.
NASA Astrophysics Data System (ADS)
Dehkordi, N. Mahdian; Sadati, N.; Hamzeh, M.
2017-09-01
This paper presents a robust dc-link voltage and current control strategy for a bidirectional interlink converter (BIC) in a hybrid ac/dc microgrid. To enhance dc-bus voltage control, conventional methods strive to measure and feed forward the load or source power in the dc-bus control scheme. However, the conventional feedforward-based approaches require remote measurement with communications. Moreover, conventional methods suffer from stability and performance issues, mainly due to the use of the small-signal-based control design method. To overcome these issues, in this paper, the power from DG units of the dc subgrid imposed on the BIC is treated as an unmeasurable disturbance signal. In the proposed method, in contrast to existing methods, a robust controller designed from the nonlinear model of the BIC, and requiring no remote measurement with communications, effectively rejects the impact of the disturbance signal imposed on the BIC's dc-link voltage. To avoid communication links, the robust controller has a plug-and-play feature that makes it possible to add a DG/load to or remove it from the dc subgrid without disturbing the hybrid microgrid stability. Finally, Monte Carlo simulations are conducted to confirm the effectiveness of the proposed control strategy in the MATLAB/SimPowerSystems environment.
Ptosis assessment spectacles: a new method of measuring lid position and movement in children.
Khandwala, Mona; Dey, Sarju; Harcourt, Cassie; Wood, Clive; Jones, Carole A
2011-01-01
Accurate assessment of eyelid position and movement is vital in planning the surgical correction of ptosis. Conventional measurements taken using a millimeter ruler are considered the gold standard, although in young children this can be a difficult procedure. The authors have designed ptosis assessment spectacles with a measuring millimeter scale marked on the center of the lens to facilitate accurate assessment of eyelid position and function in children. The purpose of the study was to assess the accuracy and reproducibility of eyelid measurement using these ptosis assessment spectacles. Fifty-two children aged 2-12 years were recruited in this study. Each child underwent 2 sets of measurements. The first was undertaken by an ophthalmologist in the conventional manner using a ruler, and the second set made with ptosis assessment spectacles. On each occasion the palpebral aperture, skin crease, and levator function were recorded in millimeters. A verbal analog scale was used to assess parent satisfaction with each method. Clinically acceptable reproducibility was shown with the ruler and the spectacles for all measurements: palpebral aperture, skin crease, and levator function. Parents significantly preferred the glasses for measurement, as compared with the ruler (p < 0.05). The spectacles are as accurate as conventional methods of measurement, but are easier to use. Children tolerate these spectacles well, and most parents preferred them to the ruler.
Matsui, Shogo; Kajikawa, Masato; Maruhashi, Tatsuya; Hashimoto, Haruki; Kihara, Yasuki; Chayama, Kazuaki; Goto, Chikara; Aibara, Yoshiki; Yusoff, Farina Mohamad; Kishimoto, Shinji; Nakashima, Ayumu; Noma, Kensuke; Kawaguchi, Tomohiro; Matsumoto, Takeo; Higashi, Yukihito
2018-05-04
Measurement of flow-mediated vasodilation (FMD) is an established method for assessing endothelial function. Measurement of FMD is useful for showing the relationship between atherosclerosis and endothelial function, mechanisms of endothelial dysfunction, and clinical implications including effects of interventions and cardiovascular events. To shorten and simplify the measurement of FMD, we have developed a novel technique named short time FMD (stFMD). We investigated the validity of stFMD for assessment of endothelial function compared with conventional FMD. We evaluated stFMD and conventional FMD in 82 subjects including patients with atherosclerotic risk factors and cardiovascular disease (66 men and 16 women, 57 ± 16 years). Both stFMD and conventional FMD were significantly correlated with age, systolic blood pressure, diastolic blood pressure and baseline brachial artery diameter. In addition, stFMD was significantly correlated with conventional FMD (r = 0.76, P < 0.001). Bland-Altman plot analysis showed good agreement between stFMD and conventional FMD. Moreover, stFMD in the at risk group and that in the cardiovascular disease group were significantly lower than that in the no risk group (4.6 ± 2.3% and 4.4 ± 2.2% vs. 7.3 ± 1.9%, P < 0.001, respectively). Optimal cutoff value of stFMD for diagnosing atherosclerosis was 7.0% (sensitivity of 71.0% and specificity of 85.0%). These findings suggest that measurement of stFMD, a novel and simple method, is useful for assessing endothelial function. Measurement of stFMD may be suitable for screening of atherosclerosis when repeated measurements of vascular function are required and when performing a clinical trial using a large population. URL for Clinical Trial: http://UMIN; Registration Number for Clinical Trial: UMIN000025458. Copyright © 2017 Elsevier B.V. All rights reserved.
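Both conventional FMD and stFMD rest on the same standard definition: the percent increase of brachial artery diameter from baseline to post-occlusion peak. A minimal sketch, with the 7.0% cutoff taken from the abstract:

```python
def fmd_percent(baseline_mm, peak_mm):
    """Flow-mediated dilation: percent increase of arterial diameter
    from baseline to post-occlusion peak."""
    return (peak_mm - baseline_mm) / baseline_mm * 100.0

def below_cutoff(fmd, cutoff=7.0):
    """Flag values under the reported optimal stFMD cutoff (7.0% in
    this study) when screening for atherosclerosis."""
    return fmd < cutoff
```

For example, a 4.0 mm baseline diameter dilating to 4.2 mm corresponds to an FMD of 5.0%, which falls below the study's 7.0% cutoff.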
Quick, Jacob A; MacIntyre, Allan D; Barnes, Stephen L
2014-02-01
Surgical airway creation has a high potential for disaster. Conventional methods can be cumbersome and require special instruments. A simple method utilizing three steps and readily available equipment exists, but has yet to be adequately tested. Our objective was to compare conventional cricothyroidotomy with the three-step method utilizing high-fidelity simulation. Utilizing a high-fidelity simulator, 12 experienced flight nurses and paramedics performed both methods after a didactic lecture, simulator briefing, and demonstration of each technique. Six participants performed the three-step method first, and the remaining 6 performed the conventional method first. Each participant was filmed and timed. We analyzed videos with respect to the number of hand repositions, number of airway instrumentations, and technical complications. Times to successful completion were measured from incision to balloon inflation. The three-step method was completed faster (52.1 s vs. 87.3 s; p = 0.007) as compared with conventional surgical cricothyroidotomy. The two methods did not differ statistically regarding number of hand movements (3.75 vs. 5.25; p = 0.12) or instrumentations of the airway (1.08 vs. 1.33; p = 0.07). The three-step method resulted in 100% successful airway placement on the first attempt, compared with 75% of the conventional method (p = 0.11). Technical complications occurred more with the conventional method (33% vs. 0%; p = 0.05). The three-step method, using an elastic bougie with an endotracheal tube, was shown to require fewer total hand movements, took less time to complete, resulted in more successful airway placement, and had fewer complications compared with traditional cricothyroidotomy. Published by Elsevier Inc.
Comparing 3D foot scanning with conventional measurement methods.
Lee, Yu-Chi; Lin, Gloria; Wang, Mao-Jiun J
2014-01-01
Foot dimension information on different user groups is important for footwear design and clinical applications. Foot dimension data collected using different measurement methods presents accuracy problems. This study compared the precision and accuracy of the 3D foot scanning method with conventional foot dimension measurement methods including the digital caliper, ink footprint and digital footprint. Six commonly used foot dimensions, i.e. foot length, ball of foot length, outside ball of foot length, foot breadth diagonal, foot breadth horizontal and heel breadth were measured from 130 males and females using four foot measurement methods. Two-way ANOVA was performed to evaluate the sex and method effect on the measured foot dimensions. In addition, the mean absolute difference values and intra-class correlation coefficients (ICCs) were used for precision and accuracy evaluation. The results were also compared with the ISO 20685 criteria. The participant's sex and the measurement method were found (p < 0.05) to exert significant effects on the measured six foot dimensions. The precision of the 3D scanning measurement method with mean absolute difference values between 0.73 to 1.50 mm showed the best performance among the four measurement methods. The 3D scanning measurements showed better measurement accuracy performance than the other methods (mean absolute difference was 0.6 to 4.3 mm), except for measuring outside ball of foot length and foot breadth horizontal. The ICCs for all six foot dimension measurements among the four measurement methods were within the 0.61 to 0.98 range. Overall, the 3D foot scanner is recommended for collecting foot anthropometric data because it has relatively higher precision, accuracy and robustness. This finding suggests that when comparing foot anthropometric data among different references, it is important to consider the differences caused by the different measurement methods.
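The precision and accuracy figures quoted are mean absolute differences between paired measurements, which ISO 20685 compares against per-dimension allowable errors. A minimal sketch (the actual ISO limits are not reproduced here; the limit argument is illustrative):

```python
def mean_absolute_difference(method_a, method_b):
    """Mean absolute difference (mm) between paired measurements of the
    same foot dimension taken with two methods."""
    assert len(method_a) == len(method_b)
    return sum(abs(a - b) for a, b in zip(method_a, method_b)) / len(method_a)

def within_allowable_error(mad_mm, limit_mm):
    """Compare a mean absolute difference against an allowable error,
    as ISO 20685 does per dimension."""
    return mad_mm <= limit_mm
```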
The Use of an Intra-Articular Depth Guide in the Measurement of Partial Thickness Rotator Cuff Tears
Carroll, Michael J.; More, Kristie D.; Sohmer, Stephen; Nelson, Atiba A.; Sciore, Paul; Boorman, Richard; Hollinshead, Robert; Lo, Ian K. Y.
2013-01-01
Purpose. The purpose of this study was to compare the accuracy of the conventional method for determining the percentage of partial thickness rotator cuff tears to a method using an intra-articular depth guide. The clinical utility of the intra-articular depth guide was also examined. Methods. Partial rotator cuff tears were created in cadaveric shoulders. Exposed footprint, total tendon thickness, and percentage of tendon thickness torn were determined using both techniques. The results from the conventional and intra-articular depth guide methods were correlated with the true anatomic measurements. Thirty-two patients were evaluated in the clinical study. Results. Estimates of total tendon thickness (r = 0.41, P = 0.31) or percentage of thickness tears (r = 0.67, P = 0.07) using the conventional method did not correlate well with true tendon thickness. Using the intra-articular depth guide, estimates of exposed footprint (r = 0.92, P = 0.001), total tendon thickness (r = 0.96, P = 0.0001), and percentage of tendon thickness torn (r = 0.88, P = 0.004) correlated with true anatomic measurements. Seven of 32 patients had their treatment plan altered based on the measurements made by the intra-articular depth guide. Conclusions. The intra-articular depth guide appeared to better correlate with true anatomic measurements. It may be useful during the evaluation and development of treatment plans for partial thickness articular surface rotator cuff tears. PMID:23533789
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-10-24
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method effectively avoids the friction torque and additional inertial moment existing in conventional approaches. A curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. Comparison with the conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation and the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy could be improved by the novel 3D magnet array. The study result could be used for real-time motion control of PM spherical actuators.
Aoyama, Toru; Fujikawa, Hirohito; Cho, Haruhiko; Ogata, Takashi; Shirai, Junya; Hayashi, Tsutomu; Rino, Yasushi; Masuda, Munetaka; Oba, Mari S; Morita, Satoshi; Yoshikawa, Takaki
2015-02-01
Harvesting lymph nodes (LNs) after gastrectomy is essential for accurate staging. This trial evaluated the efficiency and quality of a conventional method and a methylene blue-assisted method in a randomized manner. The key eligibility criteria were as follows: (i) histologically proven adenocarcinoma of the stomach; (ii) clinical stage I-III; (iii) R0 resection planned by gastrectomy with D1+ or D2 lymphadenectomy. The primary endpoint was the ratio of the pathologic number of harvested LNs per time (minutes) as an efficacy measure. The secondary endpoint was the number of harvested LNs, as a quality measure. Between August 2012 and December 2012, 60 patients were assigned to undergo treatment using the conventional method (n=29) and the methylene blue dye method (n=31). The baseline demographics were mostly well balanced between the 2 groups. The number of harvested LNs (mean±SD) was 33.6±11.9 in the conventional arm and 43.4±13.9 in the methylene blue arm (P=0.005). The ratio of the number of the harvested LNs per time was 1.12±0.46 LNs/min in the conventional arm and 1.49±0.59 LNs/min in the methylene blue arm (P=0.010). In the subgroup analyses, the quality and efficacy were both superior for the methylene blue dye method compared with the conventional method. The methylene blue technique is recommended for harvesting LNs during gastric cancer surgery on the basis of both the quality and efficacy.
Akay, Erdem; Yilmaz, Cagatay; Kocaman, Esat S; Turkmen, Halit S; Yildiz, Mehmet
2016-09-19
The significance of strain measurement is obvious for the analysis of Fiber-Reinforced Polymer (FRP) composites. Conventional strain measurement methods are sufficient for static testing in general. Nevertheless, if the requirements exceed the capabilities of these conventional methods, more sophisticated techniques are necessary to obtain strain data. Fiber Bragg Grating (FBG) sensors have many advantages for strain measurement over conventional ones. Thus, the present paper suggests a novel method for biaxial strain measurement using embedded FBG sensors during the fatigue testing of FRP composites. Poisson's ratio and its reduction were monitored for each cyclic loading by using embedded FBG sensors for a given specimen and correlated with the fatigue stages determined based on the variations of the applied fatigue loading and temperature due to the autogenous heating to predict an oncoming failure of the continuous fiber-reinforced epoxy matrix composite specimens under fatigue loading. The results show that FBG sensor technology has a remarkable potential for monitoring the evolution of Poisson's ratio on a cycle-by-cycle basis, which can reliably be used towards tracking the fatigue stages of composite for structural health monitoring purposes.
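The strain read-out of an FBG sensor follows the standard relation Δλ/λ = (1 − p_e)·ε at constant temperature, and Poisson's ratio follows from two orthogonally embedded gratings. A minimal sketch; the photo-elastic coefficient is a typical silica value, and the temperature cross-sensitivity that the paper tracks via autogenous heating is ignored here:

```python
PE_SILICA = 0.22  # typical effective photo-elastic coefficient of silica fiber

def strain_from_bragg_shift(dlambda_nm, bragg_nm, pe=PE_SILICA):
    """Axial strain from a Bragg wavelength shift at constant temperature:
    dlambda / lambda = (1 - p_e) * strain."""
    return dlambda_nm / (bragg_nm * (1.0 - pe))

def poisson_ratio(axial_strain, transverse_strain):
    """nu = -eps_transverse / eps_axial, from two orthogonal embedded FBGs;
    its cycle-by-cycle reduction is the fatigue indicator discussed above."""
    return -transverse_strain / axial_strain
```

A 1.209 nm shift of a 1550 nm grating corresponds to roughly 1000 microstrain under these assumptions.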
Choi, Ki Hwan; Chung, Song Ee; Chung, Tae Young; Chung, Eui Sang
2007-04-01
To assess the efficacy of the ultrasound biomicroscopic (UBM) method in estimating the sulcus-to-sulcus horizontal diameter for Visian Implantable Contact Lens (ICL, model V4) length determination to obtain optimal ICL vault. The results of postoperative ICL vaults in 30 eyes of 18 patients were retrospectively analyzed. In 17 eyes, ICL length was determined using the conventional method, and in 13 eyes, ICL length was determined using the UBM method. The UBM method was carried out by measuring the sulcus to limbus distance on each side by 50 MHz UBM and adding the white-to-white diameter by caliper or Orbscan. The ICL vaults were measured using the UBM method at 1 and 6 months postoperatively and the results were compared between the two groups. Ideal ICL vault was defined as vault between 250 and 750 µm. The relation between the ICL vault, footplate location, and ICL power was also investigated. In the UBM method group, ICL vault was within the ideal range in all 13 (100%) eyes at 1 and 6 months postoperatively, whereas in the conventional method group, 10 (58.8%) eyes showed ideal vault at 1 month postoperatively (P = .01) and 9 (52.9%) eyes showed ideal vault at 6 months postoperatively (P < .01). The ideal ICL footplate location was achieved in the ciliary sulcus in 11 (84.6%) eyes of the UBM method group and 10 (64.7%) eyes of the conventional method group. However, the differences between the two groups were not statistically significant. The ICL vault was not significantly affected by the ICL power. Implantable Contact Lens length determined by the UBM method achieved significantly more ideal ICL vault than that of the conventional white-to-white method. The UBM method is superior to the conventional method in terms of predicting the sulcus-to-sulcus horizontal diameter for ICL length determination.
[Evaluation of Wits appraisal with superimposition method].
Xu, T; Ahn, J; Baumrind, S
1999-07-01
To compare the conventional Wits appraisal with a superimposition-based Wits appraisal in evaluating the change in sagittal jaw relationship between pre- and post-orthodontic treatment. The sample consisted of pre- and post-treatment lateral head films from 48 cases. Computerized digitizing was used to obtain the cephalometric landmarks and to measure the conventional Wits value, the superimposed Wits value, and the ANB angle. Correlation analysis among these three measures was performed with the SAS statistical package. The change in ANB angle correlated more strongly with the change in superimposed Wits than with the change in conventional Wits; the r value was as high as 0.849 (P < 0.001). The superimposed Wits appraisal reflects the change in sagittal jaw relationship more objectively than the conventional one.
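The comparison rests on Pearson correlation between per-case changes. A minimal self-contained implementation of the coefficient:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two paired series,
    e.g. per-patient change in ANB angle vs. change in Wits value."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```

Values near +1 indicate that the two measures track the same per-case change, which is the basis of the r = 0.849 result reported above.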
Modified coaxial wire method for measurement of transfer impedance of beam position monitors
NASA Astrophysics Data System (ADS)
Kumar, Mukesh; Babbar, L. K.; Deo, R. K.; Puntambekar, T. A.; Senecha, V. K.
2018-05-01
The transfer impedance is a very important parameter of a beam position monitor (BPM) which relates its output signal with the beam current. The coaxial wire method is a standard technique to measure transfer impedance of the BPM. The conventional coaxial wire method requires impedance matching between coaxial wire and external circuits (vector network analyzer and associated cables). This paper presents a modified coaxial wire method for bench measurement of the transfer impedance of capacitive pickups like button electrodes and shoe box BPMs. Unlike the conventional coaxial wire method, in the modified coaxial wire method no impedance matching elements have been used between the device under test and the external circuit. The effect of impedance mismatch has been solved mathematically and a new expression of transfer impedance has been derived. The proposed method is verified through simulation of a button electrode BPM using CST Studio Suite. The new method is also applied to measure transfer impedance of a button electrode BPM developed for insertion devices of Indus-2 and the results are also compared with its simulations. Close agreement between measured and simulation results suggests that the modified coaxial wire setup can be exploited for the measurement of transfer impedance of capacitive BPMs like button electrodes and shoe box BPMs.
Instruments for measuring the amount of moisture in the air
NASA Technical Reports Server (NTRS)
Johnson, D. L.
1978-01-01
A summarization and discussion of the many systems available for measuring moisture in the atmosphere is presented. Conventional methods used in the field of meteorology and methods used in the laboratory are discussed. Performance, accuracy, and response of the instruments are reviewed, as well as the advantages and disadvantages of each. Methods of measuring humidity aloft by instrumentation onboard aircraft and balloons are given, in addition to the methods used to measure moisture at the Earth's surface.
Quantification of protein interaction kinetics in a micro droplet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, L. L.; College of Chemistry and Chemical Engineering, Chongqing University, Chongqing 400044; Wang, S. P., E-mail: shaopeng.wang@asu.edu, E-mail: njtao@asu.edu
Characterization of protein interactions is essential to the discovery of disease biomarkers, the development of diagnostic assays, and the screening for therapeutic drugs. Conventional flow-through kinetic measurements need a relatively large amount of sample, which is not feasible for precious protein samples. We report a novel method to measure protein interaction kinetics in a single droplet with sub-microliter or less volume. A droplet in a humidity-controlled environmental chamber replaces the microfluidic channels as the reactor for the protein interaction. The binding process is monitored by a surface plasmon resonance imaging (SPRi) system. Association curves are obtained from the average SPR image intensity in the center area of the droplet. The washing step required by the conventional flow-through SPR method is eliminated in the droplet method. The association and dissociation rate constants and binding affinity of an antigen-antibody interaction are obtained by global fitting of association curves at different concentrations. The result obtained by this method is accurate as validated by a conventional flow-through SPR system. This droplet-based method not only allows kinetic studies for proteins with limited supply but also opens the door for high-throughput protein interaction study in a droplet-based microarray format that enables measurement of many-to-many interactions on a single chip.
Quantification of protein interaction kinetics in a micro droplet
NASA Astrophysics Data System (ADS)
Yin, L. L.; Wang, S. P.; Shan, X. N.; Zhang, S. T.; Tao, N. J.
2015-11-01
Characterization of protein interactions is essential to the discovery of disease biomarkers, the development of diagnostic assays, and the screening for therapeutic drugs. Conventional flow-through kinetic measurements require a relatively large amount of sample, which is not feasible for precious protein samples. We report a novel method to measure protein interaction kinetics in a single droplet with a sub-microliter volume. A droplet in a humidity-controlled environmental chamber replaces the microfluidic channels as the reactor for the protein interaction. The binding process is monitored by a surface plasmon resonance imaging (SPRi) system. Association curves are obtained from the average SPR image intensity in the center area of the droplet. The washing step required by the conventional flow-through SPR method is eliminated in the droplet method. The association and dissociation rate constants and binding affinity of an antigen-antibody interaction are obtained by global fitting of association curves at different concentrations. The results obtained by this method are accurate, as validated against a conventional flow-through SPR system. This droplet-based method not only allows kinetic studies for proteins with limited supply but also opens the door for high-throughput protein interaction studies in a droplet-based microarray format that enables measurement of many-to-many interactions on a single chip.
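The global-fitting step described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' code: it assumes a simple 1:1 Langmuir binding model and fits one shared (ka, kd, Rmax) to simulated association curves at three analyte concentrations; all rate constants and concentrations are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# 1:1 Langmuir association phase:
# R(t) = Rmax * C/(C + KD) * (1 - exp(-(ka*C + kd)*t)), with KD = kd/ka
def association(t, conc, ka, kd, rmax):
    kobs = ka * conc + kd
    req = rmax * conc / (conc + kd / ka)
    return req * (1.0 - np.exp(-kobs * t))

true_ka, true_kd, true_rmax = 1e5, 1e-3, 100.0      # M^-1 s^-1, s^-1, RU
t = np.linspace(0.0, 300.0, 61)                     # time points, s
concs = [5e-9, 20e-9, 80e-9]                        # analyte concentrations, M
curves = [association(t, c, true_ka, true_kd, true_rmax) for c in concs]

# Global fit: one shared parameter set across all concentrations.
# ka and kd are fitted on a log10 scale to handle their different magnitudes.
def residuals(p):
    ka, kd, rmax = 10.0**p[0], 10.0**p[1], p[2]
    return np.concatenate([association(t, c, ka, kd, rmax) - y
                           for c, y in zip(concs, curves)])

fit = least_squares(residuals, x0=[4.5, -2.5, 80.0])
ka_fit, kd_fit, rmax_fit = 10.0**fit.x[0], 10.0**fit.x[1], fit.x[2]
print(ka_fit, kd_fit, kd_fit / ka_fit)  # recovered ka, kd, and KD
```

Fitting all concentrations jointly is what constrains ka and kd separately; a single curve only determines their combination kobs.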
USDA-ARS?s Scientific Manuscript database
Stable hydrogen isotope methodology is used in nutrition studies to measure growth, breast milk intake, and energy requirement. Isotope ratio MS is the best instrumentation to measure the stable hydrogen isotope ratios in physiological fluids. Conventional methods to convert physiological fluids to ...
Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech
2012-12-01
The aim was to predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We have conducted a comprehensive survey of purity methods and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements, and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable to dynamically assess method performance characteristics, based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitate the introduction of more advanced analytical technologies during the method lifecycle.
ERIC Educational Resources Information Center
Naidu, S.
2007-01-01
Central to the argument about the influence of media on learning is how this influence is measured or ascertained. Conventional methods which comprise the use of true and quasi-experimental designs are inadequate. Several lessons can be learned from this observation on the media debate. The first is that, conventional methods of ascertaining the…
K-space data processing for magnetic resonance elastography (MRE).
Corbin, Nadège; Breton, Elodie; de Mathelin, Michel; Vappou, Jonathan
2017-04-01
Magnetic resonance elastography (MRE) requires substantial data processing based on phase image reconstruction, wave enhancement, and inverse problem solving. The objective of this study is to propose a new, fast MRE method based on MR raw data processing, particularly adapted to applications requiring fast MRE measurement or high elastogram update rate. The proposed method allows measuring tissue elasticity directly from raw data without prior phase image reconstruction and without phase unwrapping. Experimental feasibility is assessed both in a gelatin phantom and in the liver of a porcine model in vivo. Elastograms are reconstructed with the raw MRE method and compared to those obtained using conventional MRE. In a third experiment, changes in elasticity are monitored in real-time in a gelatin phantom during its solidification by using both conventional MRE and raw MRE. The raw MRE method shows promising results by providing similar elasticity values to the ones obtained with conventional MRE methods while decreasing the number of processing steps and circumventing the delicate step of phase unwrapping. Limitations of the proposed method are the influence of the magnitude on the elastogram and the requirement for a minimum number of phase offsets. This study demonstrates the feasibility of directly reconstructing elastograms from raw data.
2011-01-01
Background: Fall events contribute significantly to mortality, morbidity and costs in our ageing population. In order to identify persons at risk and to target preventive measures, many scores and assessment tools have been developed. These often require expertise and are costly to implement. Recent research investigates the use of wearable inertial sensors to provide objective data on motion features which can be used to assess individual fall risk automatically. So far it is unknown how well this new method performs in comparison with conventional fall risk assessment tools. The aim of our research is to compare the predictive performance of our new sensor-based method with conventional and established methods, based on prospective data. Methods: In a first study phase, 119 inpatients of a geriatric clinic took part in motion measurements using a wireless triaxial accelerometer during a Timed Up&Go (TUG) test and a 20 m walk. Furthermore, the St. Thomas Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) was performed, and the multidisciplinary geriatric care team estimated the patients' fall risk. In a second follow-up phase of the study, 46 of the participants were interviewed after one year, including a fall and activity assessment. The predictive performances of the TUG, the STRATIFY and team scores are compared. Furthermore, two automatically induced logistic regression models, one based on conventional clinical and assessment data (CONV) and one based on sensor data (SENSOR), are compared. Results: Among the risk assessment scores, the geriatric team score (sensitivity 56%, specificity 80%) outperforms STRATIFY and TUG. The induced logistic regression models CONV and SENSOR achieve similar performance values (sensitivity 68%/58%, specificity 74%/78%, AUC 0.74/0.72, +LR 2.64/2.61). Both models are able to identify more persons at risk than the simple scores.
Conclusions Sensor-based objective measurements of motion parameters in geriatric patients can be used to assess individual fall risk, and our prediction model's performance matches that of a model based on conventional clinical and assessment data. Sensor-based measurements using a small wearable device may contribute significant information to conventional methods and are feasible in an unsupervised setting. More prospective research is needed to assess the cost-benefit relation of our approach. PMID:21711504
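The summary statistics reported above are tied together by standard definitions; as a small illustrative check (not taken from the paper), the positive likelihood ratio follows directly from sensitivity and specificity:

```python
# Positive likelihood ratio: +LR = sensitivity / (1 - specificity)
def pos_lr(sens, spec):
    return sens / (1.0 - spec)

# Rounded values reported for the CONV model: sensitivity 68%, specificity 74%
print(round(pos_lr(0.68, 0.74), 2))  # 2.62; the paper reports 2.64 from unrounded data
```

The small discrepancy is expected when the ratio is computed from already-rounded percentages.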
Leston, Alan R; Ollison, Will M
2017-11-01
Long-standing measurement techniques for determining ground-level ozone (O3) and nitrogen dioxide (NO2) are known to be biased by interfering compounds that result in overestimates of high O3 and NO2 ambient concentrations under conducive conditions. An increasing near-ground O3 gradient (NGOG) with increasing height above ground level is also known to exist. Both the interference bias and the NGOG were investigated by comparing data from a conventional Federal Equivalent Method (FEM) O3 photometer and an identical monitor upgraded with an "interference-free" nitric oxide O3 scrubber that alternately sampled at 2 m and 6.2 m inlet heights above ground level (AGL). An intercomparison was also made between a conventional nitrogen oxide (NOx) chemiluminescence Federal Reference Method (FRM) monitor and a new "direct-measure" NO2/NOx 405 nm photometer at a near-road air quality measurement site. Results indicate that the O3 monitor with the upgraded scrubber recorded lower regulatory-oriented concentrations than the deployed conventional metal oxide-scrubbed monitor and that O3 concentrations 6.2 m AGL were higher than concentrations 2.0 m AGL, the nominal nose height of outdoor populations. Also, the new direct-measure NO2 photometer recorded generally lower NO2 regulatory-oriented concentrations than the conventional FRM chemiluminescence monitor, reporting lower daily maximum hourly average concentrations than the conventional monitor about 3 of every 5 days. Employing bias-prone instruments to measure ambient ozone or nitrogen dioxide from inlets at inappropriate heights above ground level may result in the collection of positively biased data. This paper discusses tests of new regulatory instruments, recent developments in bias-free ozone and nitrogen dioxide measurement technology, and the presence and extent of the NGOG.
Collection of unbiased monitor inlet height-appropriate data is crucial for determining accurate design values and meeting National Ambient Air Quality Standards.
Khalid, Ashiq Hussain; Kontis, Konstantinos
2008-01-01
This paper reviews the state of phosphor thermometry, focusing on developments in the past 15 years. The fundamental principles and theory are presented, and the various spectral and temporal modes, including the lifetime decay, rise time and intensity ratio, are discussed. The entire phosphor measurement system, including relative advantages to conventional methods, choice of phosphors, bonding techniques, excitation sources and emission detection, is reviewed. Special attention is given to issues that may arise at high temperatures. A number of recent developments and applications are surveyed, with examples including: measurements in engines, hypersonic wind tunnel experiments, pyrolysis studies and droplet/spray/gas temperature determination. They show the technique is flexible and successful in measuring temperatures where conventional methods may prove to be unsuitable. PMID:27873836
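As a concrete illustration of the intensity-ratio mode mentioned above: for thermally coupled levels, the two-line emission ratio follows a Boltzmann law, so temperature can be recovered by inverting the calibrated ratio. The calibration constants below are invented for illustration and are not taken from the review:

```python
import math

KB = 8.617e-5  # Boltzmann constant, eV/K

# Two-line intensity ratio: R(T) = A * exp(-dE / (kB * T)), so
# T = dE / (kB * ln(A / R))
def temperature_from_ratio(ratio, A, dE_eV):
    return dE_eV / (KB * math.log(A / ratio))

# Hypothetical calibration constants for an illustrative phosphor
A, dE = 8.0, 0.25  # pre-factor (dimensionless), energy gap (eV)
R = A * math.exp(-dE / (KB * 1200.0))  # simulated ratio at 1200 K
print(round(temperature_from_ratio(R, A, dE)))  # 1200
```

Because the ratio cancels the absolute emission intensity, this mode is insensitive to excitation fluctuations and collection geometry, which is one reason it is favored at high temperatures.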
NASA Astrophysics Data System (ADS)
Kurnia, H.; Noerhadi, N. A. I.
2017-08-01
Three-dimensional digital study models were introduced following advances in digital technology. This study was carried out to assess the reliability of digital study models produced by a newly assembled laser scanning device. The aim of this study was to compare the digital study models with conventional models. Twelve sets of dental impressions were taken from patients with mild-to-moderate crowding. The impressions were taken twice, once with alginate and once with polyvinylsiloxane. The alginate impressions were made into conventional models, and the polyvinylsiloxane impressions were scanned to produce digital models. The mesiodistal tooth width and Little's irregularity index (LII) were measured manually with digital calipers on the conventional models and digitally on the digital study models. Bolton analysis was performed on each set of study models. Each method was carried out twice to check for intra-observer variability. The reproducibility (comparison of the methods) was assessed using independent-sample t-tests. The mesiodistal tooth width did not differ significantly between conventional and digital models (p > 0.05). Independent-sample t-tests did not identify statistically significant differences for the Bolton analysis or the LII (p = 0.603 for Bolton and p = 0.894 for LII). The measurements of the digital study models are as accurate as those of the conventional models.
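The Bolton analysis mentioned above compares summed mesiodistal tooth widths between the arches. A minimal sketch of the overall ratio is below; the tooth widths are hypothetical, not the study's data:

```python
# Bolton overall ratio: (sum of the 12 mandibular mesiodistal widths /
# sum of the 12 maxillary widths) * 100. Bolton's reported mean is ~91.3%.
def bolton_overall(mandibular_mm, maxillary_mm):
    return 100.0 * sum(mandibular_mm) / sum(maxillary_mm)

# Hypothetical widths (mm), first molar to first molar in each arch
mand = [10.5, 7.0, 7.0, 6.9, 5.9, 5.4, 5.4, 5.9, 6.9, 7.0, 7.0, 10.5]
maxi = [10.0, 7.5, 7.5, 7.7, 6.6, 8.5, 8.5, 6.6, 7.7, 7.5, 7.5, 10.0]
print(round(bolton_overall(mand, maxi), 1))  # ~89.3 for these illustrative widths
```

A ratio far from the population mean flags a tooth-size discrepancy between arches, whichever model type the widths were measured on.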
Subliminal or not? Comparing null-hypothesis and Bayesian methods for testing subliminal priming.
Sand, Anders; Nilsson, Mats E
2016-08-01
A difficulty for reports of subliminal priming is demonstrating that participants who actually perceived the prime are not driving the priming effects. There are two conventional methods for testing this. One is to test whether a direct measure of stimulus perception is not significantly above chance on a group level. The other is to use regression to test if an indirect measure of stimulus processing is significantly above zero when the direct measure is at chance. Here we simulated samples in which we assumed that only participants who perceived the primes were primed by them. Conventional analyses applied to these samples had a very large error rate of falsely supporting subliminal priming. Calculating a Bayes factor for the samples very seldom falsely supported subliminal priming. We conclude that conventional tests are not reliable diagnostics of subliminal priming. Instead, we recommend that experimenters calculate a Bayes factor when investigating subliminal priming. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
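A minimal sketch of the recommended Bayes-factor approach for the direct measure is below, assuming a binomial detection task with a uniform prior on accuracy under H1; this is an illustration of the idea, not the authors' exact analysis:

```python
from math import comb

# Bayes factor for a direct-measure score (k correct out of n trials):
# H0: p = 0.5 (chance); H1: p ~ Uniform(0, 1).
# The marginal likelihood under a uniform prior is exactly 1/(n+1), so
# BF01 = P(data|H0) / P(data|H1) = C(n,k) * 0.5^n * (n+1).
def bf01(k, n):
    return comb(n, k) * 0.5**n * (n + 1)

# 50 correct of 100 trials: positive evidence FOR chance-level perception
print(round(bf01(50, 100), 1))  # ~8.0
```

Unlike a non-significant t-test, which is mute about the null, BF01 > 1 quantifies how strongly the data support chance-level perception, which is exactly what a subliminality claim needs.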
Speksnijder, L; Rousian, M; Steegers, E A P; Van Der Spek, P J; Koning, A H J; Steensma, A B
2012-07-01
Virtual reality is a novel method of visualizing ultrasound data with the perception of depth and offers possibilities for measuring non-planar structures. The levator ani hiatus has both convex and concave aspects. The aim of this study was to compare levator ani hiatus volume measurements obtained with conventional three-dimensional (3D) ultrasound and with a virtual reality measurement technique, and to establish their reliability and agreement. One hundred symptomatic patients visiting a tertiary pelvic floor clinic with a normal intact levator ani muscle diagnosed on translabial ultrasound were selected. Datasets were analyzed using a rendered volume with a slice thickness of 1.5 cm at the level of minimal hiatal dimensions during contraction. In conventional 3D ultrasound, the levator area (in cm²) was measured and multiplied by 1.5 to obtain the levator ani hiatus volume (in cm³). Levator ani hiatus volumes were then measured semi-automatically in virtual reality (in cm³) using a segmentation algorithm. An intra- and interobserver analysis of reliability and agreement was performed in 20 randomly chosen patients. The mean difference between levator ani hiatus volume measurements performed using conventional 3D ultrasound and virtual reality was 0.10 cm³ (95% CI, -0.15 to 0.35). The intraclass correlation coefficient (ICC) comparing conventional 3D ultrasound with virtual reality measurements was > 0.96. Intra- and interobserver ICCs were > 0.94 for conventional 3D ultrasound measurements and > 0.97 for virtual reality measurements, indicating good reliability for both. Levator ani hiatus volume measurements performed using virtual reality were reliable and the results were similar to those obtained with conventional 3D ultrasonography. Copyright © 2012 ISUOG. Published by John Wiley & Sons, Ltd.
Monitoring beach changes using GPS surveying techniques
Morton, Robert; Leach, Mark P.; Paine, Jeffrey G.; Cardoza, Michael A.
1993-01-01
The adaptation of Global Positioning System (GPS) surveying techniques to beach monitoring activities is a promising response to the challenge of measuring beach change accurately. An experiment that employed both GPS and conventional beach surveying was conducted, and a new beach monitoring method employing kinematic GPS surveys was devised. This new method involves the collection of precise shore-parallel and shore-normal GPS positions from a moving vehicle so that an accurate two-dimensional beach surface can be generated. Results show that the GPS measurements agree with conventional shore-normal surveys at the 1 cm level, and repeated GPS measurements employing the moving vehicle demonstrate a precision of better than 1 cm. In addition, the nearly continuous sampling and increased resolution provided by the GPS surveying technique reveal alongshore changes in beach morphology that are undetected by conventional shore-normal profiles. The application of GPS surveying techniques, combined with the refinement of appropriate methods for data collection and analysis, provides a better understanding of beach changes, sediment transport, and storm impacts.
A new tritiated water measurement method with plastic scintillator pellets.
Furuta, Etsuko; Iwasaki, Noriko; Kato, Yuka; Tomozoe, Yusuke
2016-01-01
A new tritiated water measurement method using plastic scintillator pellets (PS-pellets) with a conventional liquid scintillation counter was developed. The PS-pellets used were 3 mm in both diameter and length. A low-potassium glass vial was filled with the pellets, and 5 to 100 μl of tritiated water was applied to the vial. The sample solution then dispersed in the interstices of the pellets in the vial. This method needs no liquid scintillator, so no organic liquid waste is generated. The counting efficiency with the pellets was approximately 48% when a 5 μl sample was used, which was higher than that of conventional measurement using a liquid scintillator. The relationship between count rate and activity showed good linearity. The pellets could be used repeatedly, so little solid waste is generated with this method. The PS-pellets are useful for tritiated water measurement; however, it is necessary to develop a new device that can handle larger volumes and measure low-level concentrations, as in environmental applications.
Ahn, T; Moon, S; Youk, Y; Jung, Y; Oh, K; Kim, D
2005-05-30
A novel mode analysis method and differential mode delay (DMD) measurement technique for a multimode optical fiber based on optical frequency domain reflectometry (OFDR) has been proposed for the first time. We used a conventional OFDR with a tunable external cavity laser and a Michelson interferometer. A few-mode multimode optical fiber was prepared to test the proposed measurement technique. We also compared the OFDR measurement results with those obtained using a traditional time-domain measurement method.
Bad data detection in two stage estimation using phasor measurements
NASA Astrophysics Data System (ADS)
Tarali, Aditya
The ability of the Phasor Measurement Unit (PMU) to directly measure the system state has led to a steady increase in the use of PMUs in the past decade. However, in spite of their high accuracy and ability to measure states directly, PMUs cannot completely replace conventional measurement units due to high cost. Hence it is necessary for modern estimators to use both conventional and phasor measurements together. This thesis presents an alternative method to incorporate new PMU measurements into an existing state estimator in a systematic manner such that no major modification is necessary to the existing algorithm. It is also shown that if PMUs are placed appropriately, the phasor measurements can be used with this model to detect and identify bad data associated with critical measurements, which cannot be detected by a conventional state estimation algorithm. The developed model is tested on the IEEE 14-, IEEE 30- and IEEE 118-bus systems under various conditions.
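The residual-based bad data detection that conventional estimators rely on can be sketched on a toy linearized model. The system below is a hypothetical 2-state example, not the IEEE test cases used in the thesis:

```python
import numpy as np

# Toy weighted least-squares (WLS) state estimation on a linear model z = H x + e.
# A gross error is flagged by the largest normalized residual r_i / sqrt(Omega_ii),
# where Omega = S R is the residual covariance and S = I - H (H^T W H)^-1 H^T W.
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0],
              [1.0,  1.0]])        # 4 measurements of 2 states (hypothetical)
R = np.diag([1e-4] * 4)            # measurement error covariance
W = np.linalg.inv(R)
x_true = np.array([1.02, 0.97])

z = H @ x_true
z[2] += 0.05                       # inject bad data into measurement index 2

G = H.T @ W @ H                    # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)

S = np.eye(4) - H @ np.linalg.solve(G, H.T @ W)
Omega = S @ R                      # residual covariance (S is idempotent)
r = z - H @ x_hat
rN = np.abs(r) / np.sqrt(np.diag(Omega))

print(int(np.argmax(rN)))          # 2: the corrupted measurement is identified
```

The thesis's point is that for a *critical* measurement the residual is structurally zero, so this test fails there; adding well-placed PMU measurements restores the redundancy the test needs.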
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-01-01
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method could effectively avoid friction torque and additional inertial moment existing in conventional approaches. Curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. The comparison with conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation and the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy could be improved by the novel 3D magnet array. The study result could be used for real-time motion control of PM spherical actuators. PMID:25342000
Intraoperative panoramic image using alignment grid, is it accurate?
Apivatthakakul, T; Duanghakrung, M; Luevitoonvechkit, S; Patumasutra, S
2013-07-01
Minimally invasive orthopedic trauma surgery relies heavily on intraoperative fluoroscopic images to evaluate the quality of fracture reduction and fixation. However, fluoroscopic images have a narrow field of view and often cannot visualize the entire long bone axis. The aim was to compare the coronal femoral alignment measured on conventional X-rays with that measured using a new method of acquiring a panoramic intraoperative image. Twenty-four cadaveric femurs with simple diaphyseal fractures were fixed with an angulated broad DCP to create coronal plane malalignment. An intraoperative alignment grid was used to help stitch different fluoroscopic images together to produce a panoramic image. A conventional X-ray of the entire femur was then performed. The coronal plane angulation in the panoramic images was then compared to the conventional X-rays using a Wilcoxon signed rank test. The mean angle measured from the panoramic view was 173.9° (range 169.3°-178.0°), with a median of 173.2°. The mean angle measured from the conventional X-ray was 173.4° (range 167.7°-178.7°), with a median of 173.5°. There was no significant difference between the two methods of measurement (P = 0.48). Panoramic images produced by stitching fluoroscopic images together with the help of an alignment grid demonstrated the same accuracy in evaluating the coronal plane alignment of femur fractures as conventional X-rays.
High-precision Non-Contact Measurement of Creep of Ultra-High Temperature Materials for Aerospace
NASA Technical Reports Server (NTRS)
Rogers, Jan R.; Hyers, Robert
2008-01-01
For high-temperature applications (greater than 2,000 C) such as solid rocket motors, hypersonic aircraft, nuclear electric/thermal propulsion for spacecraft, and more efficient jet engines, creep becomes one of the most important design factors to be considered. Conventional creep-testing methods, where the specimen and test apparatus are in contact with each other, are limited to temperatures of approximately 1,700 C. Development of alloys for higher-temperature applications is limited by the availability of testing methods at temperatures above 2,000 C. Development of alloys for applications requiring a long service life at temperatures as low as 1,500 C, such as the next generation of jet turbine superalloys, is limited by the difficulty of accelerated testing at temperatures above 1,700 C. For these reasons, a new, non-contact creep-measurement technique is needed for higher-temperature applications. A new non-contact method for creep measurements of ultra-high-temperature metals and ceramics has been developed and validated. Using the electrostatic levitation (ESL) facility at NASA Marshall Space Flight Center, a spherical sample is rotated quickly enough to cause creep deformation due to centrifugal acceleration. Very accurate measurement of the deformed shape through digital image analysis allows the stress exponent n to be determined very precisely from a single test, rather than from numerous conventional tests. Validation tests on single-crystal niobium spheres showed excellent agreement with conventional tests at 1,985 C; however, the non-contact method provides much greater precision while using only about 40 milligrams of material. This method is being applied to materials including metals and ceramics for non-eroding throats in solid rockets and next-generation superalloys for turbine engines. Recent advances in the method and the current state of these new measurements will be presented.
Pickup, William; Bremer, Phil; Peng, Mei
2018-03-01
The extensive time and cost associated with conventional sensory profiling methods have spurred sensory researchers to develop rapid alternatives, such as Napping® with Ultra-Flash Profiling (UFP). Napping®-UFP generates sensory maps by requiring untrained panellists to separate samples based on perceived sensory similarities. Evaluations of this method have been restricted to manufactured/formulated food models, and predominantly structured on comparisons against the conventional descriptive method. The present study aims to extend the validation of Napping®-UFP (N = 72) to natural biological products, and to evaluate this method against Descriptive Analysis (DA; N = 8) with physicochemical measurements as an additional evaluative criterion. The results revealed that the sample configurations generated by DA and Napping®-UFP were not significantly correlated (RV = 0.425, P = 0.077); however, both were correlated with the product map generated from the instrumental measures (P < 0.05). The findings also showed that sample characterisations from DA and Napping®-UFP were driven by different sensory attributes, indicating potential structural differences between these two methods in configuring samples. Overall, these findings lend support for the extended use of Napping®-UFP for evaluations of natural biological products. Although DA was shown to be a better method for establishing sensory-instrumental relationships, Napping®-UFP exhibited strengths in generating informative sample configurations based on holistic perception of products. © 2017 Society of Chemical Industry.
Surface acoustical intensity measurements on a diesel engine
NASA Technical Reports Server (NTRS)
Mcgary, M. C.; Crocker, M. J.
1980-01-01
The use of surface intensity measurements as an alternative to the conventional selective wrapping technique of noise source identification and ranking on diesel engines was investigated. A six-cylinder, in-line, turbocharged, 350-horsepower diesel engine was used. Sound power was measured under anechoic conditions for eight separate parts of the engine at steady-state operating conditions using the conventional technique. Sound power measurements were repeated on five separate parts of the engine using surface intensity at the same steady-state operating conditions. The results were compared by plotting sound power level against frequency and by comparing noise source rankings for the two methods.
SU-E-T-293: Simplifying Assumption for Determining Sc and Sp
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, R; Cheung, A; Anderson, R
Purpose: Scp(mlc,jaw) is a two-dimensional function of collimator field size and effective field size. Conventionally, Scp(mlc,jaw) is treated as separable into components Sc(jaw) and Sp(mlc). Scp(mlc=jaw) is measured in phantom and Sc(jaw) is measured in air, with Sp=Scp/Sc. Ideally, Sc and Sp would be able to predict measured values of Scp(mlc,jaw) for all combinations of mlc and jaw. However, ideal Sc and Sp functions do not exist, and a measured two-dimensional Scp dataset cannot be decomposed into a unique pair of one-dimensional functions. If the output functions Sc(jaw) and Sp(mlc) were equal to each other, and thus each equal to Scp(mlc=jaw)^0.5, this condition would lead to a simpler measurement process by eliminating the need for in-air measurements. Without the distorting effect of the buildup cap, small-field measurement would be limited only by the dimensions of the detector and would thus be improved by this simplification of the output functions. The goal of the present study is to evaluate the assumption that Sc=Sp. Methods: For a 6 MV x-ray beam, Sc and Sp were determined both by the conventional method and as Scp(mlc=jaw)^0.5. Square-field benchmark values of Scp(mlc,jaw) were then measured across the range from 2×2 to 29×29. Both sets of Sc and Sp functions were then evaluated on their ability to predict these measurements. Results: Both methods produced qualitatively similar results, with <4% error in all cases and >3% error in 1 case. The conventional method produced 2 cases with >2% error, while the square-root method produced only 1 such case. Conclusion: Though it would need to be validated for any specific beam to which it might be applied, under the conditions studied, the simplifying assumption that Sc = Sp is justified.
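The square-root simplification can be probed numerically. The sketch below builds a synthetic, nearly separable Scp table from invented factors (not the study's 6 MV data) and compares the diagonal-only square-root reconstruction against it:

```python
import numpy as np

# Illustrative check of the simplifying assumption Sc = Sp = Scp(s,s)^0.5.
# The Scp table below is synthetic and nearly separable, NOT measured data.
sizes = np.array([2.0, 5.0, 10.0, 20.0, 29.0])   # square field sides, cm

def f(s):  # hypothetical collimator-scatter-like factor
    return 0.92 + 0.035 * np.log(s)

def g(s):  # hypothetical phantom-scatter-like factor
    return 0.90 + 0.045 * np.log(s)

# "Measured" benchmark: Scp(jaw, mlc) = f(jaw) * g(mlc), normalized at 10x10
scp = np.outer(f(sizes), g(sizes)) / (f(10.0) * g(10.0))

# Square-root method: use only the diagonal Scp(s, s)
diag = np.sqrt(np.diag(scp))
pred = np.outer(diag, diag)

err = np.abs(pred - scp) / scp
print(err.max())  # worst-case relative error; small when f and g behave similarly
```

For these factors the worst-case error stays within a few percent, mirroring the abstract's observation that the square-root method performs comparably to the conventional decomposition.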
Feasibility of ballistic strengthening exercises in neurologic rehabilitation.
Williams, Gavin; Clark, Ross A; Hansson, Jessica; Paterson, Kade
2014-09-01
Conventional methods for strength training in neurologic rehabilitation are not task specific for walking. Ballistic strength training was developed to improve the functional transfer of strength training; however, no research has investigated this in neurologic populations. The aim of this pilot study was to evaluate the feasibility of applying ballistic principles to conventional leg strengthening exercises in individuals with mobility limitations as a result of neurologic injuries. Eleven individuals with neurologic injuries completed seated and reclined leg press using conventional and ballistic techniques. A 2 × 2 repeated-measures analysis of variance was used to compare power measures (peak movement height and peak velocity) between exercises and conditions. Peak jump velocity and peak jump height were greater when using the ballistic jump technique rather than the conventional concentric technique (P < 0.01). These findings suggest that when compared with conventional strengthening exercises, the incorporation of ballistic principles was associated with increased peak height and peak velocities.
van IJsseldijk, E A; Valstar, E R; Stoel, B C; Nelissen, R G H H; Baka, N; Van't Klooster, R; Kaptein, B L
2016-08-01
An important measure for the diagnosis and monitoring of knee osteoarthritis is the minimum joint space width (mJSW). This requires accurate alignment of the x-ray beam with the tibial plateau, which may not be accomplished in practice. We investigate the feasibility of a new mJSW measurement method from stereo radiographs using 3D statistical shape models (SSM) and evaluate its sensitivity to changes in the mJSW and its robustness to variations in patient positioning and bone geometry. A validation study was performed using five cadaver specimens. The actual mJSW was varied and images were acquired with variation in the cadaver positioning. For comparison purposes, the mJSW was also assessed from plain radiographs. To study the influence of SSM model accuracy, the 3D mJSW measurement was repeated with models of the actual bones, obtained from CT scans. The SSM-based measurement method was more robust than the conventional 2D method, producing consistent output under varying measurement circumstances and showing that the 3D reconstruction indeed reduces the influence of patient positioning. However, the SSM-based method showed sensitivity to changes in the mJSW comparable to that of the conventional method. The CT-based measurement was more accurate than the SSM-based measurement (smallest detectable differences 0.55 mm versus 0.82 mm, respectively). The proposed measurement method is not a substitute for the conventional 2D measurement due to limitations in the SSM model accuracy. However, further improvement of the model accuracy and optimisation technique can be obtained. Combined with the promising options for applications using quantitative information on bone morphology, SSM-based 3D reconstructions of natural knees are attractive for further development. Cite this article: E. A. van IJsseldijk, E. R. Valstar, B. C. Stoel, R. G. H. H. Nelissen, N. Baka, R. van't Klooster, B. L. Kaptein. 
Three dimensional measurement of minimum joint space width in the knee from stereo radiographs using statistical shape models. Bone Joint Res 2016;320-327. DOI: 10.1302/2046-3758.58.2000626. © 2016 van IJsseldijk et al.
Farida, Abesi; Maryam, Ehsani; Ali, Mirzapour; Ehsan, Moudi; Sajad, Yousefi; Soraya, Khafri
2013-01-01
Obtaining a correct working length is necessary for successful root canal treatment. The aim of this study was to compare conventional and digital radiography in measuring root canal working length. In this in vitro study, 20 mesiobuccal canals from maxillary first molars with moderate to severe curvature and 20 canals from anterior teeth with mild curvature were chosen, and their working lengths were measured with a number 15 K-file (Maillefer, DENTSPLY, Germany). Then, for each canal, five radiographs were taken: three conventional radiographs using three processing methods (manual, automatic, and monobath solution), in addition to two digital radiographs using CCD and PSP receptors. Two independent observers measured the working length with each technique. Finally, the mean working length in each group was compared with the real working length using a paired t-test. A one-way ANOVA test was also used for comparing the two groups. The level of statistical significance was P < 0.05. The results showed a high interobserver agreement on the measurements of the working length in conventional and digital radiography (P ≤ 0.001). There was also no significant difference between conventional and digital radiography in measuring working length (P > 0.05). It was therefore concluded that the accuracy of digital radiography is comparable with that of conventional radiography in measuring working length; considering its advantages, digital radiography can be used for working length determination.
Yasukawa, Keiko; Shimosawa, Tatsuo; Okubo, Shigeo; Yatomi, Yutaka
2018-01-01
Background Human mercaptalbumin and human non-mercaptalbumin have been reported as markers for various pathological conditions, such as kidney and liver diseases. These markers play important roles in redox regulations throughout the body. Despite the recognition of these markers in various pathophysiologic conditions, the measurements of human mercaptalbumin and non-mercaptalbumin have not been popular because of the technical complexity and long measurement time of conventional methods. Methods Based on previous reports, we explored the optimal analytical conditions for a high-performance liquid chromatography method using an anion-exchange column packed with a hydrophilic polyvinyl alcohol gel. The method was then validated using performance tests as well as measurements of various patients' serum samples. Results We successfully established a reliable high-performance liquid chromatography method with an analytical time of only 12 min per test. The repeatability (within-day variability) and reproducibility (day-to-day variability) were 0.30% and 0.27% (CV), respectively. A very good correlation was obtained with the results of the conventional method. Conclusions A practical method for the clinical measurement of human mercaptalbumin and non-mercaptalbumin was established. This high-performance liquid chromatography method is expected to be a powerful tool enabling the expansion of clinical usefulness and ensuring the elucidation of the roles of albumin in redox reactions throughout the human body.
A rapid leaf-disc sampler for psychrometric water potential measurements.
Wullschleger, S D; Oosterhuis, D M
1986-06-01
An instrument was designed which facilitates faster and more accurate sampling of leaf discs for psychrometric water potential measurements. The instrument consists of an aluminum housing, a spring-loaded plunger, and a modified brass-plated cork borer. The leaf-disc sampler was compared with the conventional method of sampling discs for measurement of leaf water potential with thermocouple psychrometers on a range of plant material including Gossypium hirsutum L., Zea mays L., and Begonia rex-cultorum L. The new sampler permitted a leaf disc to be excised and inserted into the psychrometer sample chamber in less than 7 seconds, which was more than twice as fast as the conventional method. This resulted in more accurate determinations of leaf water potential due to reduced evaporative water losses. The leaf-disc sampler also significantly reduced sample variability between individual measurements. This instrument can be used for many other laboratory and field measurements that necessitate leaf disc sampling.
Accuracy of complete-arch dental impressions: a new method of measuring trueness and precision.
Ender, Andreas; Mehl, Albert
2013-02-01
A new approach to both 3-dimensional (3D) trueness and precision is necessary to assess the accuracy of intraoral digital impressions and compare them to conventionally acquired impressions. The purpose of this in vitro study was to evaluate whether a new reference scanner is capable of measuring conventional and digital intraoral complete-arch impressions for 3D accuracy. A steel reference dentate model was fabricated and measured with a reference scanner (digital reference model). Conventional impressions were made from the reference model, poured with Type IV dental stone, scanned with the reference scanner, and exported as digital models. Additionally, digital impressions of the reference model were made and the digital models were exported. Precision was measured by superimposing the digital models within each group. Superimposing the digital models on the digital reference model assessed the trueness of each impression method. Statistical significance was assessed with an independent sample t test (α=.05). The reference scanner delivered high accuracy over the entire dental arch with a precision of 1.6 ±0.6 µm and a trueness of 5.3 ±1.1 µm. Conventional impressions showed significantly higher precision (12.5 ±2.5 µm) and trueness values (20.4 ±2.2 µm), with small deviations in the second molar region (P<.001). Digital impressions were significantly less accurate, with a precision of 32.4 ±9.6 µm and a trueness of 58.6 ±15.8 µm (P<.001). More systematic deviations of the digital models were visible across the entire dental arch. The new reference scanner is capable of measuring the precision and trueness of both digital and conventional complete-arch impressions. The digital impression is less accurate and shows a different pattern of deviation than the conventional impression. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
Kim, Eok Bong; Lee, Jae-hwan; Trung, Luu Tran; Lee, Wong-Kyu; Yu, Dai-Hyuk; Ryu, Han Young; Nam, Chang Hee; Park, Chang Yong
2009-11-09
We developed an optical frequency synthesizer (OFS) with the carrier-envelope-offset frequency locked to 0 Hz, achieved using the "direct locking method." This method differs from a conventional phase-lock method in that the interference signal from a self-referencing f-2f interferometer is fed back directly to the carrier-envelope-phase control of a femtosecond laser in the time domain. A comparison of the optical frequency of the new OFS to that of a conventional OFS stabilized by a phase-lock method showed that the frequency comb of the new OFS did not differ from that of the conventional OFS within an uncertainty of 5.68×10⁻¹⁶. As a practical application of this OFS, we measured the absolute frequency of an acetylene-stabilized diode laser serving as an optical frequency standard in optical communications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandian, Muthu Senthil, E-mail: senthilpandianm@ssn.edu.in; Sivasubramani, V.; Ramasamy, P.
2015-06-24
A transparent uniaxial L-arginine 4-nitrophenolate 4-nitrophenol dihydrate (LAPP) single crystal, 20 mm in diameter and 45 mm in length, was grown by the Sankaranarayanan-Ramasamy (SR) method at a growth rate of 1 mm per day. Using an identical solution, a conventional crystal of 8×5×5 mm³ was obtained over a period of 30 days. The crystal structure was confirmed by single-crystal X-ray diffraction measurement. The crystalline perfection of LAPP crystals grown by the slow evaporation solution technique (SEST) and the SR method was characterized using Vickers microhardness, UV-Vis-NIR, chemical etching, and dark and photocurrent measurements. These studies indicate that the quality of the SR-method-grown LAPP crystal is better than that of the crystal grown by the conventional method.
NASA Astrophysics Data System (ADS)
Miyachi, Yukiya; Arakawa, Mototaka; Kanai, Hiroshi
2018-07-01
In our studies on ultrasonic elasticity assessment, minute changes in the thickness of the arterial wall were measured by the phased-tracking method. However, most images in carotid artery examinations contain multiple-reflection noise, making it difficult to evaluate arterial wall elasticity precisely. In the present study, a modified phased-tracking method using the pulse inversion method was examined to reduce the influence of multiple-reflection noise. Moreover, aliasing in the harmonic components was corrected using the fundamental components. The conventional and proposed methods were applied to a pulsating tube phantom mimicking the arterial wall. With the conventional method, the measured elasticity was 298 kPa without multiple-reflection noise and 353 kPa with multiple-reflection noise on the posterior wall. With the proposed method, it was 302 kPa without multiple-reflection noise and 297 kPa with multiple-reflection noise on the posterior wall. The proposed method is therefore highly robust against multiple-reflection noise.
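The pulse inversion idea underlying the modified method can be illustrated numerically: summing the echoes of a pulse and its phase-inverted copy cancels the linear (fundamental) component, while even-harmonic distortion adds constructively. The toy signal model below is a sketch of that principle only, not the authors' implementation:

```python
import math

def echo(phase_sign, t):
    """Simulated received echo: a linear (fundamental) response plus a
    small second-harmonic distortion term (toy tissue model)."""
    f0 = 1.0  # fundamental frequency, arbitrary units
    linear = phase_sign * math.sin(2 * math.pi * f0 * t)
    harmonic = 0.1 * math.sin(2 * math.pi * 2 * f0 * t)  # even harmonics keep their sign
    return linear + harmonic

ts = [i / 100 for i in range(100)]
# Sum the echoes of the normal and phase-inverted transmissions.
summed = [echo(+1, t) + echo(-1, t) for t in ts]

# The fundamental cancels exactly; only the doubled harmonic remains.
residual = max(abs(s - 0.2 * math.sin(2 * math.pi * 2.0 * t))
               for s, t in zip(summed, ts))
print(residual)
```

Because the fundamental terms are equal and opposite, the summed signal is purely the doubled harmonic, which is what makes harmonic tracking less sensitive to multiple-reflection noise.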
Development of ocular viscosity characterization method.
Shu-Hao Lu; Guo-Zhen Chen; Leung, Stanley Y Y; Lam, David C C
2016-08-01
Glaucoma is the second leading cause of blindness. Irreversible and progressive optic nerve damage results when the intraocular pressure (IOP) exceeds 21 mmHg. The elevated IOP is attributed to blocked fluid drainage from the eye. Methods to measure the IOP are widely available, but methods to measure the viscous response to blocked drainage have yet to be developed. An indentation method to characterize ocular flow is developed in this study. Analysis of the load-relaxation data from indentation tests on drainage-controlled porcine eyes showed that blocked drainage is correlated with increases in ocular viscosity. The successful correlation of ocular viscosity with drainage suggests that ocular viscosity may be further developed as a new diagnostic parameter for the assessment of normal-tension glaucoma, where nerve damage occurs without noticeable IOP elevation, and as a parameter complementary to conventional IOP in diagnosis.
NASA Astrophysics Data System (ADS)
Kaburaki, Kaori; Mozumi, Michiya; Hasegawa, Hideyuki
2018-07-01
Methods for the estimation of two-dimensional (2D) velocity and displacement of physiological tissues are necessary for quantitative diagnosis. In echocardiography with a phased array probe, the accuracy in the estimation of the lateral motion is lower than that of the axial motion. To improve the accuracy in the estimation of the lateral motion, in the present study, the coordinate system for ultrasonic beamforming was changed from the conventional polar coordinate to the Cartesian coordinate. In a basic experiment, the motion velocity of a phantom, which was moved at a constant speed, was estimated by the conventional and proposed methods. The proposed method reduced the bias error and standard deviation in the estimated motion velocities. In an in vivo measurement, intracardiac blood flow was analyzed by the proposed method.
A Comparison of Spatial Statistical Methods in a School Finance Policy Context
ERIC Educational Resources Information Center
Slagle, Mike
2010-01-01
A shortcoming of the conventional ordinary least squares (OLS) approaches for estimating median voter models of education demand is the inability to more fully explain the spatial relationships between neighboring school districts. Consequently, two school districts that appear to be descriptively similar in terms of conventional measures of…
NASA Technical Reports Server (NTRS)
Ho, K. K.; Moody, G. B.; Peng, C. K.; Mietus, J. E.; Larson, M. G.; Levy, D.; Goldberger, A. L.
1997-01-01
BACKGROUND: Despite much recent interest in quantification of heart rate variability (HRV), the prognostic value of conventional measures of HRV and of newer indices based on nonlinear dynamics is not universally accepted. METHODS AND RESULTS: We have designed algorithms for analyzing ambulatory ECG recordings and measuring HRV without human intervention, using robust methods for obtaining time-domain measures (mean and SD of heart rate), frequency-domain measures (power in the bands of 0.001 to 0.01 Hz [VLF], 0.01 to 0.15 Hz [LF], and 0.15 to 0.5 Hz [HF] and total spectral power [TP] over all three of these bands), and measures based on nonlinear dynamics (approximate entropy [ApEn], a measure of complexity, and detrended fluctuation analysis [DFA], a measure of long-term correlations). The study population consisted of chronic congestive heart failure (CHF) case patients and sex- and age-matched control subjects in the Framingham Heart Study. After exclusion of technically inadequate studies and those with atrial fibrillation, we used these algorithms to study HRV in 2-hour ambulatory ECG recordings of 69 participants (mean age, 71.7+/-8.1 years). By use of separate Cox proportional-hazards models, the conventional measures SD (P<.01), LF (P<.01), VLF (P<.05), and TP (P<.01) and the nonlinear measure DFA (P<.05) were predictors of survival over a mean follow-up period of 1.9 years; other measures, including ApEn (P>.3), were not. In multivariable models, DFA was of borderline predictive significance (P=.06) after adjustment for the diagnosis of CHF and SD. CONCLUSIONS: These results demonstrate that HRV analysis of ambulatory ECG recordings based on fully automated methods can have prognostic value in a population-based study and that nonlinear HRV indices may contribute prognostic value to complement traditional HRV measures.
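As a minimal illustration of the time-domain measures named above (mean and SD of heart rate), the sketch below computes them from a list of RR intervals. The interval values are hypothetical, and this is not the study's automated analysis pipeline:

```python
import statistics

def time_domain_hrv(rr_ms):
    """Mean and SD of instantaneous heart rate (bpm) from RR intervals
    in milliseconds -- the time-domain HRV measures of the abstract."""
    hr = [60000.0 / rr for rr in rr_ms]  # instantaneous heart rate, bpm
    return statistics.mean(hr), statistics.stdev(hr)

rr = [800, 810, 790, 805, 795, 820, 780]  # hypothetical NN intervals, ms
mean_hr, sd_hr = time_domain_hrv(rr)
print(round(mean_hr, 1), round(sd_hr, 1))
```

The frequency-domain and nonlinear measures (VLF/LF/HF power, ApEn, DFA) require spectral estimation and fluctuation analysis and are omitted here.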
Fluence-based and microdosimetric event-based methods for radiation protection in space
NASA Technical Reports Server (NTRS)
Curtis, Stanley B.; Meinhold, C. B. (Principal Investigator)
2002-01-01
The National Council on Radiation Protection and Measurements (NCRP) has recently published a report (Report No. 137) that discusses various aspects of the concepts used in radiation protection and the difficulties in measuring the radiation environment in spacecraft for the estimation of radiation risk to space travelers. Two novel dosimetric methodologies, fluence-based and microdosimetric event-based methods, are discussed and evaluated, along with the more conventional quality factor/LET method. It was concluded that, for the present, there is no compelling reason to switch to a new methodology, but that because of certain drawbacks in the conventional method, these alternative methodologies should be kept in mind. As new data become available and dosimetric techniques become more refined, the question should be revisited, and significant improvement might be realized in the future. In addition, concepts such as equivalent dose and organ dose equivalent are discussed, and various problems regarding the measurement and estimation of these quantities are presented.
USDA-ARS?s Scientific Manuscript database
The doubly labeled water method is considered the reference method to measure energy expenditure. Conventional mass spectrometry requires a separate aliquot of the same sample to be prepared and analyzed separately. With continuous-flow isotope-ratio mass spectrometry, the same sample could be analy...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tejabhiram, Y., E-mail: tejabhiram@gmail.com; Pradeep, R.; Helen, A.T.
2014-12-15
Highlights: • Novel low-temperature synthesis of nickel ferrite nanoparticles. • Comparison with two conventional synthesis techniques, including the hydrothermal method. • XRD results confirm the formation of crystalline nickel ferrites at 110 °C. • Superparamagnetic particles with applications in drug delivery and hyperthermia. • Magnetic properties of the new process superior to those of conventional methods. - Abstract: We report a simple, low-temperature, surfactant-free co-precipitation method for the preparation of nickel ferrite nanostructures using ferrous sulfate as the iron precursor. The products obtained from this method were compared, in terms of their physical properties, with nickel ferrites produced through conventional co-precipitation and hydrothermal methods, which used ferric nitrate as the iron precursor. X-ray diffraction analysis confirmed the synthesis of single-phase inverse spinel nanocrystalline nickel ferrites at temperatures as low as 110 °C in the low-temperature method. Electron microscopy revealed the formation of nearly spherical nanostructures in the size range of 20-30 nm, comparable to the other conventional methods. Vibrating sample magnetometer measurements showed the formation of superparamagnetic particles with a high saturation magnetization of 41.3 emu/g, which corresponds well with conventional synthesis methods. The spontaneous synthesis of the nickel ferrite nanoparticles by the low-temperature method was attributed to the presence of 0.808 kJ mol⁻¹ of excess Gibbs free energy due to the ferrous sulfate precursor.
Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan
2017-02-20
In this paper, an improved azimuth angle estimation method using a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in active sonar detection systems. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. Computer simulation and lake experiment results indicate that this method can estimate the azimuth angle with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, it does not require complex frequency-domain operations and reduces computational complexity.
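The conventional intensity-based azimuth estimate that the paper builds on can be sketched as follows: the time-averaged products of pressure with the two velocity channels give the acoustic intensity components, and their quadrant-aware arctangent gives the bearing. The plane-wave signals below are synthetic, and the sketch omits the paper's matched-filtering stage:

```python
import math

def intensity_azimuth(p, vx, vy):
    """Bearing (degrees) from time-averaged acoustic intensity
    components -- the conventional single-AVS estimate."""
    ix = sum(pi * vi for pi, vi in zip(p, vx)) / len(p)
    iy = sum(pi * vi for pi, vi in zip(p, vy)) / len(p)
    return math.degrees(math.atan2(iy, ix))

# Synthetic plane wave arriving from 30 degrees: the velocity channels
# are the pressure signal scaled by the direction cosines.
theta = math.radians(30.0)
p = [math.sin(0.1 * n) for n in range(1000)]
vx = [math.cos(theta) * s for s in p]
vy = [math.sin(theta) * s for s in p]
print(intensity_azimuth(p, vx, vy))  # ~30.0
```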
Marschollek, Michael; Rehwald, Anja; Wolf, Klaus-Hendrik; Gietzelt, Matthias; Nemitz, Gerhard; zu Schwabedissen, Hubertus Meyer; Schulze, Mareike
2011-06-28
Fall events contribute significantly to mortality, morbidity and costs in our ageing population. In order to identify persons at risk and to target preventive measures, many scores and assessment tools have been developed. These often require expertise and are costly to implement. Recent research investigates the use of wearable inertial sensors to provide objective data on motion features which can be used to assess individual fall risk automatically. So far it is unknown how well this new method performs in comparison with conventional fall risk assessment tools. The aim of our research is to compare the predictive performance of our new sensor-based method with conventional and established methods, based on prospective data. In a first study phase, 119 inpatients of a geriatric clinic took part in motion measurements using a wireless triaxial accelerometer during a Timed Up&Go (TUG) test and a 20 m walk. Furthermore, the St. Thomas Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) was performed, and the multidisciplinary geriatric care team estimated the patients' fall risk. In a second follow-up phase of the study, 46 of the participants were interviewed after one year, including a fall and activity assessment. The predictive performances of the TUG, the STRATIFY and team scores are compared. Furthermore, two automatically induced logistic regression models based on conventional clinical and assessment data (CONV) as well as sensor data (SENSOR) are matched. Among the risk assessment scores, the geriatric team score (sensitivity 56%, specificity 80%) outperforms STRATIFY and TUG. The induced logistic regression models CONV and SENSOR achieve similar performance values (sensitivity 68%/58%, specificity 74%/78%, AUC 0.74/0.72, +LR 2.64/2.61). Both models are able to identify more persons at risk than the simple scores. 
Sensor-based objective measurements of motion parameters in geriatric patients can be used to assess individual fall risk, and our prediction model's performance matches that of a model based on conventional clinical and assessment data. Sensor-based measurements using a small wearable device may contribute significant information to conventional methods and are feasible in an unsupervised setting. More prospective research is needed to assess the cost-benefit relation of our approach.
Evaluation of counting methods for oceanic radium-228
NASA Astrophysics Data System (ADS)
Orr, James C.
1988-07-01
Measurement of open-ocean 228Ra is difficult, typically requiring at least 200 L of seawater. The burden of collecting and processing these large-volume samples severely limits the widespread use of this promising tracer. To use smaller-volume samples, a more sensitive means of analysis is required. To seek out new and improved counting methods, conventional 228Ra counting methods have been compared with some promising techniques currently used for other radionuclides. Of the conventional methods, α spectrometry possesses the highest efficiency (3-9%) and lowest background (0.0015 cpm), but it suffers from the need for complex chemical processing after sampling and a delay of about 1 year for adequate ingrowth of the 228Th granddaughter. The other two conventional counting methods measure the short-lived 228Ac daughter while it remains supported by 228Ra, thereby avoiding the complex sample processing and the long delay before counting. The first of these, high-resolution γ spectrometry, offers the simplest processing and an efficiency (4.8%) comparable to α spectrometry; yet its high background (0.16 cpm) and substantial equipment cost (~30,000) limit its widespread use. The second no-wait method, β-γ coincidence spectrometry, also offers comparable efficiency (5.3%), but possesses both lower background (0.0054 cpm) and lower initial cost (~12,000). Three new (i.e., untried for 228Ra) techniques all promise about a fivefold increase in efficiency over conventional methods. By employing liquid scintillation methods, both α spectrometry and β-γ coincidence spectrometry can improve their counter efficiency while retaining low background. The third new 228Ra counting method could be adapted from a technique that measures 224Ra by 220Rn emanation. 
After allowing for ingrowth and then counting the 224Ra great-granddaughter, 228Ra could be back-calculated, yielding a high-efficiency method that requires no sample processing. The efficiency and background of each of the three new methods have been estimated and are compared with those of the three methods currently employed to measure oceanic 228Ra. From efficiency and background, the relative figure of merit and the detection limit have been determined for each of the six counters. These data suggest that the new counting methods have the potential to measure most 228Ra samples with just 30 L of seawater, to better than 5% precision. Not only would this reduce the time, effort, and expense involved in sample collection, but 228Ra could then be measured on many small-volume samples (20-30 L) previously collected with only 226Ra in mind. By measuring 228Ra quantitatively on such small-volume samples, three analyses (large-volume 228Ra, large-volume 226Ra, and small-volume 226Ra) could be reduced to one, thereby dramatically improving analytical precision.
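A common definition of a counter's figure of merit in low-level counting is E²/B (efficiency squared over background), which rewards efficiency and penalizes background. Assuming that form (the paper's exact definition may differ), the values quoted in the abstract for the three conventional counters can be compared directly:

```python
def figure_of_merit(efficiency, background_cpm):
    """Counting figure of merit E^2 / B (assumed form; the paper's
    exact definition may differ)."""
    return efficiency ** 2 / background_cpm

# Efficiency (fraction) and background (cpm) as quoted in the abstract;
# alpha spectrometry uses the midpoint of its 3-9% efficiency range.
counters = {
    "alpha spectrometry": (0.06, 0.0015),
    "high-resolution gamma spectrometry": (0.048, 0.16),
    "beta-gamma coincidence spectrometry": (0.053, 0.0054),
}
for name, (eff, bkg) in sorted(counters.items(),
                               key=lambda kv: -figure_of_merit(*kv[1])):
    print(f"{name}: {figure_of_merit(eff, bkg):.3f}")
```

Under this metric, α spectrometry ranks highest despite the processing burden, consistent with the abstract's description of its low background.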
Cheah, A K W; Kangkorn, T; Tan, E H; Loo, M L; Chong, S J
2018-01-01
Accurate estimation of the total body surface area burned (TBSAB) is a crucial aspect of early burn management. It helps guide resuscitation and is essential in the calculation of fluid requirements. Conventional methods of estimation can often lead to large discrepancies in burn percentage estimation. We aimed to compare a new method of TBSAB estimation using a three-dimensional smart-phone application named 3D Burn Resuscitation (3D Burn) against conventional methods of estimation: the Rule of Palm, the Rule of Nines, and the Lund and Browder chart. Three volunteer subjects were moulaged with simulated burn injuries of 25%, 30% and 35% total body surface area (TBSA), respectively. Various healthcare workers were invited to use both the 3D Burn application and the conventional methods stated above to estimate the volunteer subjects' burn percentages. Collective relative estimations across the groups showed that the Rule of Palm, the Rule of Nines and the Lund and Browder chart over-estimated the burn area by an average of 10.6%, 19.7% and 8.3% TBSA, respectively, while the 3D Burn application under-estimated burns by an average of 1.9%. There was a statistically significant difference between the 3D Burn application estimations and all three other modalities (p < 0.05). The time taken to use the application was significantly longer than that of the traditional methods of estimation. The 3D Burn application, although slower, allowed more accurate TBSAB measurements when compared with conventional methods. The validation study has shown that the 3D Burn application is useful in improving the accuracy of TBSAB measurement. Further studies are warranted, and there are plans to repeat the above study in a different centre overseas as part of a multi-centre study, with a view to progressing to a prospective study that compares the accuracy of the 3D Burn application against conventional methods on actual burn patients.
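For reference, the adult Rule of Nines assigns fixed percentages to body regions; its whole-region granularity is one source of the over-estimation reported above. A minimal sketch (the chart values are standard, the region names are illustrative):

```python
# Standard adult Rule of Nines percentages; region names are illustrative.
RULE_OF_NINES = {
    "head_and_neck": 9, "anterior_trunk": 18, "posterior_trunk": 18,
    "right_arm": 9, "left_arm": 9,
    "right_leg": 18, "left_leg": 18, "perineum": 1,
}

def tbsa_estimate(burned_regions):
    """Sum the chart values for fully burned regions; whole-region
    granularity is one reason the chart tends to over-estimate."""
    return sum(RULE_OF_NINES[r] for r in burned_regions)

print(tbsa_estimate(["anterior_trunk", "right_arm"]))  # 27
```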
Shack-Hartmann wavefront sensor with large dynamic range by adaptive spot search method.
Shinto, Hironobu; Saita, Yusuke; Nomura, Takanori
2016-07-10
A Shack-Hartmann wavefront sensor (SHWFS) consisting of a microlens array and an image sensor has been used to measure the wavefront aberrations of human eyes. However, a conventional SHWFS has a finite dynamic range that depends on the diameter of each microlens, and the dynamic range cannot easily be expanded without a decrease in spatial resolution. In this study, an adaptive spot search method to expand the dynamic range of an SHWFS is proposed. In the proposed method, spots are searched for with the help of their approximate displacements, measured with low spatial resolution and large dynamic range. With the proposed method, a wavefront can be measured correctly even if a spot moves beyond its detection area. The adaptive spot search method is realized by using a special microlens array that generates both spots and discriminable patterns. The proposed method expands the dynamic range of an SHWFS with a single shot and a short processing time. Its performance is compared with that of a conventional SHWFS by optical experiments, and its dynamic range is quantitatively evaluated by numerical simulations.
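The core SHWFS relation, on which both the conventional and the adaptive method rest, is that each local wavefront slope equals the spot displacement divided by the lenslet focal length. A minimal sketch with hypothetical units:

```python
def local_slopes(ref_spots, meas_spots, focal_length):
    """Local wavefront slopes from spot displacements:
    slope = displacement / focal length (standard SHWFS geometry)."""
    return [((mx - rx) / focal_length, (my - ry) / focal_length)
            for (rx, ry), (mx, my) in zip(ref_spots, meas_spots)]

f = 5.0  # lenslet focal length (hypothetical, mm)
ref = [(0.0, 0.0), (1.0, 0.0)]          # reference (plane-wave) spot centres
meas = [(0.02, 0.0), (1.01, -0.015)]    # measured spot centres
slopes = local_slopes(ref, meas, f)
print(slopes)
```

The dynamic-range limit arises because a displacement larger than half the lenslet pitch moves a spot into a neighbouring detection area, which is exactly the ambiguity the adaptive spot search resolves.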
Evaluation of bearing capacity of piles from cone penetration test data.
DOT National Transportation Integrated Search
2007-12-01
A statistical analysis and ranking criteria were used to compare the CPT methods and the conventional alpha design method. Based on the results, the de Ruiter/Beringen and LCPC methods showed the best capability in predicting the measured load carryi...
Wave directional spreading from point field measurements.
McAllister, M L; Venugopal, V; Borthwick, A G L
2017-04-01
Ocean waves have multidirectional components. Most wave measurements are taken at a single point, and so fail to capture information about the relative directions of the wave components directly. Conventional means of directional estimation require a minimum of three concurrent time series of measurements at different spatial locations in order to derive information on local directional wave spreading. Here, the relationship between wave nonlinearity and directionality is utilized to estimate local spreading without the need for multiple concurrent measurements, following Adcock & Taylor (Adcock & Taylor 2009 Proc. R. Soc. A 465 , 3361-3381. (doi:10.1098/rspa.2009.0031)), with the assumption that directional spreading is frequency independent. The method is applied to measurements recorded at the North Alwyn platform in the northern North Sea, and the results compared against estimates of wave spreading by conventional measurement methods and hindcast data. Records containing freak waves were excluded. It is found that the method provides accurate estimates of wave spreading over a range of conditions experienced at North Alwyn, despite the noisy chaotic signals that characterize such ocean wave data. The results provide further confirmation that Adcock and Taylor's method is applicable to metocean data and has considerable future promise as a technique to recover estimates of wave spreading from single point wave measurement devices.
Analysis of International Space Station Materials on MISSE-3 and MISSE-4
NASA Technical Reports Server (NTRS)
Finckenor, Miria M.; Golden, Johnny L.; O'Rourke, Mary Jane
2008-01-01
For high-temperature applications (>2,000 °C) such as solid rocket motors, hypersonic aircraft, nuclear electric/thermal propulsion for spacecraft, and more efficient jet engines, creep becomes one of the most important design factors to be considered. Conventional creep-testing methods, where the specimen and test apparatus are in contact with each other, are limited to temperatures below about 1,700 °C. Development of alloys for higher-temperature applications is limited by the availability of testing methods at temperatures above 2,000 °C. Development of alloys for applications requiring a long service life at temperatures as low as 1,500 °C, such as the next generation of jet turbine superalloys, is limited by the difficulty of accelerated testing at temperatures above 1,700 °C. For these reasons, a new, non-contact creep-measurement technique is needed for higher-temperature applications. A new non-contact method for creep measurements of ultra-high-temperature metals and ceramics has been developed and validated. Using the electrostatic levitation (ESL) facility at NASA Marshall Space Flight Center, a spherical sample is rotated quickly enough to cause creep deformation due to centrifugal acceleration. Very accurate measurement of the deformed shape through digital image analysis allows the stress exponent n to be determined very precisely from a single test, rather than from numerous conventional tests. Validation tests on single-crystal niobium spheres showed excellent agreement with conventional tests at 1,985 °C; however, the non-contact method provides much greater precision while using only about 40 milligrams of material. This method is being applied to materials including metals and ceramics for non-eroding throats in solid rockets and next-generation superalloys for turbine engines. Recent advances in the method and the current state of these new measurements will be presented.
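The stress exponent n mentioned above comes from the power-law creep relation, rate = A·σⁿ, so it is the slope of ln(strain rate) versus ln(stress). A minimal least-squares sketch on synthetic power-law data (the constants A and n are hypothetical, not values from the abstract):

```python
import math

def stress_exponent(stresses, strain_rates):
    """Least-squares slope of ln(strain rate) vs ln(stress), i.e. the
    exponent n in the power-law creep relation rate = A * stress**n."""
    xs = [math.log(s) for s in stresses]
    ys = [math.log(r) for r in strain_rates]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic power-law data with hypothetical constants.
A, n_true = 1e-20, 4.5
sigma = [50e6, 80e6, 120e6, 200e6]        # stress, Pa
rate = [A * s ** n_true for s in sigma]   # strain rate, 1/s
n_est = stress_exponent(sigma, rate)
print(round(n_est, 3))
```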
In vivo precision of conventional and digital methods of obtaining complete-arch dental impressions.
Ender, Andreas; Attin, Thomas; Mehl, Albert
2016-03-01
Digital impression systems have undergone significant development in recent years, but few studies have investigated the accuracy of the technique in vivo, particularly compared with conventional impression techniques. The purpose of this in vivo study was to investigate the precision of conventional and digital methods for complete-arch impressions. Complete-arch impressions were obtained using 5 conventional (polyether, POE; vinylsiloxanether, VSE; direct scannable vinylsiloxanether, VSES; digitized scannable vinylsiloxanether, VSES-D; and irreversible hydrocolloid, ALG) and 7 digital (CEREC Bluecam, CER; CEREC Omnicam, OC; Cadent iTero, ITE; Lava COS, LAV; Lava True Definition Scanner, T-Def; 3Shape Trios, TRI; and 3Shape Trios Color, TRC) techniques. Impressions were made 3 times each in 5 participants (N=15). The impressions were then compared within and between the test groups. The cast surfaces were measured point-to-point using the signed nearest neighbor method. Precision was calculated from the (90%-10%)/2 percentile value. The precision ranged from 12.3 μm (VSE) to 167.2 μm (ALG), with the highest precision in the VSE and VSES groups. The deviation pattern varied distinctly according to the impression method. Conventional impressions showed the highest accuracy across the complete dental arch in all groups, except for the ALG group. Conventional and digital impression methods differ significantly in complete-arch accuracy. Digital impression systems had higher local deviations within the complete arch cast; however, they achieved equal or higher precision than some conventional impression materials. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
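The precision metric used above, the (90% − 10%)/2 percentile value of the signed point-to-point deviations, can be sketched as follows; the deviation values are hypothetical, not the study's measurements:

```python
import numpy as np

# Signed point-to-point deviations between repeated casts (hypothetical
# values, in micrometers -- not the article's data).
deviations = np.array([-30.0, -12.0, -5.0, 0.0, 4.0, 9.0, 15.0, 28.0])

# Precision as defined in the abstract: (90th - 10th percentile) / 2.
p90, p10 = np.percentile(deviations, [90, 10])
precision = (p90 - p10) / 2
print(f"precision = {precision:.2f} um")
```

Using a percentile span rather than a standard deviation makes the metric robust to the few large outlier deviations typical of scan-stitching errors.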
Holland, Tanja; Blessing, Daniel; Hellwig, Stephan; Sack, Markus
2013-10-01
Radio frequency impedance spectroscopy (RFIS) is a robust method for the determination of cell biomass during fermentation. RFIS allows non-invasive in-line monitoring of the passive electrical properties of cells in suspension and can distinguish between living and dead cells based on their distinct behavior in an applied radio frequency field. We used continuous in situ RFIS to monitor batch-cultivated plant suspension cell cultures in stirred-tank bioreactors and compared the in-line data to conventional off-line measurements. RFIS-based analysis was more rapid and more accurate than conventional biomass determination, and was sensitive to changes in cell viability. The higher resolution of the in-line measurement revealed subtle changes in cell growth which were not accessible using conventional methods. Thus, RFIS is well suited for correlating such changes with intracellular states and product accumulation, providing unique opportunities for employing systems biotechnology and process analytical technology approaches to increase product yield and quality. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Kohn, J.; Mollin, D. L.; Rosenbach, L. M.
1961-01-01
A new method for the determination of urinary formiminoglutamic acid (FIGLU) using conventional electrophoresis at 200 to 500 v. on cellulose acetate strips is reported. Experience in 166 determinations on 137 patients shows the method to be a simple, practical, and apparently sensitive one for the determination of FIGLU in the urine. Results of the application of the measurement of urinary FIGLU with histidine loading as a test for folic acid deficiency are also reported. Images PMID:13757596
GESFIDE-PROPELLER approach for simultaneous R2 and R2* measurements in the abdomen.
Jin, Ning; Guo, Yang; Zhang, Zhuoli; Zhang, Longjiang; Lu, Guangming; Larson, Andrew C
2013-12-01
To investigate the feasibility of combining GESFIDE with PROPELLER sampling approaches for simultaneous abdominal R2 and R2* mapping. R2 and R2* measurements were performed in 9 healthy volunteers and phantoms using the GESFIDE-PROPELLER and the conventional Cartesian-sampling GESFIDE approaches. Images acquired with the GESFIDE-PROPELLER sequence effectively mitigated the respiratory motion artifacts, which were clearly evident in the images acquired using the conventional GESFIDE approach. There was no significant difference between GESFIDE-PROPELLER and reference MGRE R2* measurements (p=0.162) whereas the Cartesian-sampling based GESFIDE methods significantly overestimated R2* values compared to MGRE measurements (p<0.001). The GESFIDE-PROPELLER sequence provided high quality images and accurate abdominal R2 and R2* maps while avoiding the motion artifacts common to the conventional Cartesian-sampling GESFIDE approaches. © 2013 Elsevier Inc. All rights reserved.
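R2* values of the kind compared above are typically obtained by fitting a mono-exponential decay S(TE) = S0·exp(−R2*·TE) to multi-gradient-echo magnitudes. A minimal log-linear fitting sketch with hypothetical values (the GESFIDE-specific fitting, which spans spin-echo and gradient-echo segments, is more involved):

```python
import numpy as np

# Multi-gradient-echo magnitude signal: S(TE) = S0 * exp(-R2star * TE).
# Hypothetical values; R2star = 40 1/s assumed, TEs in seconds.
TE = np.array([0.002, 0.005, 0.010, 0.015, 0.020])
S0_true, r2s_true = 1000.0, 40.0
S = S0_true * np.exp(-r2s_true * TE)

# Log-linearize and fit: log S = log S0 - R2star * TE.
slope, intercept = np.polyfit(TE, np.log(S), 1)
r2s_fit = -slope
print(f"fitted R2* = {r2s_fit:.1f} 1/s")
```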
Measuring Gravitation Using Polarization Spectroscopy
NASA Technical Reports Server (NTRS)
Matsko, Andrey; Yu, Nan; Maleki, Lute
2004-01-01
A proposed method of measuring gravitational acceleration would involve the application of polarization spectroscopy to an ultracold, vertically moving cloud of atoms (an atomic fountain). A related proposed method involving measurements of absorption of light pulses like those used in conventional atomic interferometry would yield an estimate of the number of atoms participating in the interferometric interaction. The basis of the first-mentioned proposed method is that the rotation of polarization of light is affected by the acceleration of atoms along the path of propagation of the light. The rotation of polarization is associated with a phase shift: When an atom moving in a laboratory reference frame interacts with an electromagnetic wave, the energy levels of the atom are Doppler-shifted, relative to where they would be if the atom were stationary. The Doppler shift gives rise to changes in the detuning of the light from the corresponding atomic transitions. This detuning, in turn, causes the electromagnetic wave to undergo a phase shift that can be measured by conventional means. One would infer the gravitational acceleration and/or the gradient of the gravitational acceleration from the phase measurements.
Zhao, H; Stephens, B
2016-08-01
Recent experiments have demonstrated that outdoor ozone reacts with materials inside residential building enclosures, potentially reducing indoor exposures to ozone or altering ozone reaction byproducts. However, test methods to measure ozone penetration factors in residences (P) remain limited. We developed a method to measure ozone penetration factors in residences under infiltration conditions and applied it in an unoccupied apartment unit. Twenty-four repeated measurements were made, and results were explored to (i) evaluate the accuracy and repeatability of the new procedure using multiple solution methods, (ii) compare results from 'interference-free' and conventional UV absorbance ozone monitors, and (iii) compare results against those from a previously published test method requiring artificial depressurization. The mean (±s.d.) estimate of P was 0.54 ± 0.10 across a wide range of conditions using the new method with an interference-free monitor; the conventional monitor was unable to yield meaningful results due to relatively high limits of detection. Estimates of P were not clearly influenced by any indoor or outdoor environmental conditions or changes in indoor decay rate constants. This work represents the first known measurements of ozone penetration factors in a residential building operating under natural infiltration conditions and provides a new method for widespread application in buildings. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
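A sketch of how a penetration factor can be recovered from a steady-state infiltration mass balance; the balance form and all numbers are illustrative assumptions, not the study's multiple solution methods:

```python
# Steady-state indoor mass balance under infiltration (one common form,
# NOT necessarily the article's solution method):
#   C_in = P * a * C_out / (a + k)
# where a = air exchange rate (1/h) and k = indoor decay rate (1/h).
# All numbers below are hypothetical.
C_out, C_in = 40.0, 3.6   # outdoor / indoor ozone, ppb
a, k = 0.5, 2.5           # air exchange and decay rates, 1/h

# Solve the balance for P.
P = C_in * (a + k) / (a * C_out)
print(f"penetration factor P = {P:.2f}")
```

Because the indoor concentration is small relative to outdoors, a monitor's limit of detection dominates the uncertainty in C_in, which is why the conventional UV absorbance monitor failed to yield meaningful results.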
A New Void Fraction Measurement Method for Gas-Liquid Two-Phase Flow in Small Channels
Li, Huajun; Ji, Haifeng; Huang, Zhiyao; Wang, Baoliang; Li, Haiqing; Wu, Guohua
2016-01-01
Based on a laser diode, a 12 × 6 photodiode array sensor, and machine learning techniques, a new void fraction measurement method for gas-liquid two-phase flow in small channels is proposed. To overcome the influence of flow pattern on the void fraction measurement, the flow pattern of the two-phase flow is firstly identified by Fisher Discriminant Analysis (FDA). Then, according to the identification result, a relevant void fraction measurement model which is developed by Support Vector Machine (SVM) is selected to implement the void fraction measurement. A void fraction measurement system for the two-phase flow is developed and experiments are carried out in four different small channels. Four typical flow patterns (including bubble flow, slug flow, stratified flow and annular flow) are investigated. The experimental results show that the development of the measurement system is successful. The proposed void fraction measurement method is effective and the void fraction measurement accuracy is satisfactory. Compared with the conventional laser measurement systems using standard laser sources, the developed measurement system has the advantages of low cost and simple structure. Compared with the conventional void fraction measurement methods, the proposed method overcomes the influence of flow pattern on the void fraction measurement. This work also provides a good example of using low-cost laser diode as a competent replacement of the expensive standard laser source and hence implementing the parameter measurement of gas-liquid two-phase flow. The research results can be a useful reference for other researchers’ works. PMID:26828488
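The two-stage pipeline described above (flow-pattern identification, then a pattern-specific void-fraction model) can be sketched as follows. This is a simplified stand-in: only two patterns instead of four, a hand-rolled two-class Fisher discriminant, and ordinary least squares in place of the SVM regressor; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sensor features for two flow patterns (hypothetical stand-in data;
# the article uses 12x6 photodiode-array readings and four patterns).
n = 100
bubble = rng.normal([1.0, 0.0], 0.2, (n, 2))
slug = rng.normal([0.0, 1.0], 0.2, (n, 2))
X = np.vstack([bubble, slug])
y = np.array([0] * n + [1] * n)

# Step 1: two-class Fisher Discriminant Analysis for pattern identification.
m0, m1 = bubble.mean(0), slug.mean(0)
Sw = np.cov(bubble.T) + np.cov(slug.T)   # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)         # FDA projection direction
thresh = w @ (m0 + m1) / 2
pred = (X @ w > thresh).astype(int)
accuracy = (pred == y).mean()

# Step 2: pattern-specific void-fraction model.  A linear least-squares
# fit stands in here for the article's SVM regressor (simplification).
alpha = 0.1 + 0.5 * X[:, 0]              # synthetic void fraction
coef, *_ = np.linalg.lstsq(
    np.c_[X[y == 0], np.ones(n)], alpha[y == 0], rcond=None)
print(f"FDA accuracy = {accuracy:.2f}, bubble-model slope = {coef[0]:.2f}")
```

Routing each sample through the classifier first is what lets the regression model be trained per pattern, which is how the method removes the flow-pattern dependence of the void fraction measurement.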
A new method for assessing the accuracy of full arch impressions in patients.
Kuhr, F; Schmidt, A; Rehmann, P; Wöstmann, B
2016-12-01
To evaluate a new method of measuring the real deviation (trueness) of full arch impressions intraorally and to investigate the trueness of digital full arch impressions in comparison to a conventional impression procedure in clinical use. Four metal spheres were fixed with composite using a metal application aid to the lower teeth of 50 test subjects as reference structures. One conventional impression (Impregum Penta Soft) with subsequent type-IV gypsum model casting (CI) and three different digital impressions were performed in the lower jaw of each test person with the following intraoral scanners: Sirona CEREC Omnicam (OC), 3M True Definition (TD), Heraeus Cara TRIOS (cT). The digital and conventional (gypsum) models were analyzed relative to the spheres. Linear distance and angle measurements between the spheres, as well as digital superimpositions of the spheres with the reference data set were executed. With regard to the distance measurements, CI showed the smallest deviations followed by intraoral scanners TD, cT and OC. A digital superimposition procedure yielded the same order for the outcomes: CI (15±4μm), TD (23±9μm), cT (37±14μm), OC (214±38μm). Angle measurements revealed the smallest deviation for TD (0.06°±0.07°) followed by CI (0.07°±0.07°), cT (0.13°±0.15°) and OC (0.28°±0.21°). The new measuring method is suitable for measuring the dimensional accuracy of full arch impressions intraorally. CI is still significantly more accurate than full arch scans with intraoral scanners in clinical use. Conventional full arch impressions with polyether impression materials are still more accurate than full arch digital impressions. Digital impression systems using powder application and active wavefront sampling technology achieve the most accurate results in comparison to other intraoral scanning systems (DRKS-ID: DRKS00009360, German Clinical Trials Register). Copyright © 2016 Elsevier Ltd. All rights reserved.
Manufacturing implant supported auricular prostheses by rapid prototyping techniques.
Karatas, Meltem Ozdemir; Cifter, Ebru Demet; Ozenen, Didem Ozdemir; Balik, Ali; Tuncer, Erman Bulent
2011-08-01
Maxillofacial prostheses are usually fabricated on models obtained following impression procedures. Disadvantages of the conventional impression techniques used in the production of facial prostheses are deformation of the soft tissues caused by the impression material and discomfort for the patient during the procedure. Additionally, production of prostheses by conventional methods takes longer. Recently, rapid prototyping techniques have been developed for extraoral prostheses in order to reduce these disadvantages of conventional methods. Rapid prototyping has the potential to simplify the procedure and decrease the laboratory work required. It eliminates the need for impression procedures and the preparation of a wax model by the prosthodontist. In the near future this technology will become a standard for fabricating maxillofacial prostheses.
Ahn, J; Yun, I S; Yoo, H G; Choi, J-J; Lee, M
2017-01-01
Purpose To evaluate a progression-detecting algorithm for a new automated matched alternation flicker (AMAF) in glaucoma patients. Methods Open-angle glaucoma patients with a baseline mean deviation of visual field (VF) test > −6 dB were included in this longitudinal and retrospective study. Functional progression was detected by two VF progression criteria and structural progression by both AMAF and conventional comparison methods using optic disc and retinal nerve fiber layer (RNFL) photography. Progression-detecting performances of AMAF and the conventional method were evaluated by an agreement between functional and structural progression criteria. RNFL thickness changes measured by optical coherence tomography (OCT) were compared between progressing and stable eyes determined by each method. Results Among 103 eyes, 47 (45.6%), 21 (20.4%), and 32 (31.1%) eyes were evaluated as glaucoma progression using AMAF, the conventional method, and guided progression analysis (GPA) of the VF test, respectively. The AMAF showed better agreement than the conventional method, using GPA of the VF test (κ=0.337; P<0.001 and κ=0.124; P=0.191, respectively). The rates of RNFL thickness decay using OCT were significantly different between the progressing and stable eyes when progression was determined by AMAF (−3.49±2.86 μm per year vs −1.83±3.22 μm per year; P=0.007) but not by the conventional method (−3.24±2.42 μm per year vs −2.42±3.33 μm per year; P=0.290). Conclusions The AMAF was better than the conventional comparison method in discriminating structural changes during glaucoma progression, and showed a moderate agreement with functional progression criteria. PMID:27662466
NASA Astrophysics Data System (ADS)
Hagita, Norihiro; Sawaki, Minako
1995-03-01
Most conventional methods in character recognition extract geometrical features such as stroke direction, connectivity of strokes, etc., and compare them with reference patterns in a stored dictionary. Unfortunately, geometrical features are easily degraded by blurs, stains and the graphical background designs used in Japanese newspaper headlines. This noise must be removed before recognition commences, but no preprocessing method is completely accurate. This paper proposes a method for recognizing degraded characters and characters printed on graphical background designs. This method is based on the binary image feature method and uses binary images as features. A new similarity measure, called the complementary similarity measure, is used as a discriminant function. It compares the similarity and dissimilarity of binary patterns with reference dictionary patterns. Experiments are conducted using the standard character database ETL-2, which consists of machine-printed Kanji, Hiragana, Katakana, alphanumeric, and special characters. The results show that this method is much more robust against noise than the conventional geometrical feature method. It also achieves high recognition rates of over 92% for characters with textured foregrounds, over 98% for characters with textured backgrounds, over 98% for outline fonts, and over 99% for reverse contrast characters.
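A sketch of the complementary similarity measure on binary feature vectors. The a/b/c/d contingency counts and the normalization below follow one published form of the measure and should be treated as an assumption rather than the paper's verbatim definition:

```python
import numpy as np

def complementary_similarity(f, t):
    """One published form of the complementary similarity measure for
    binary vectors f (input) and t (dictionary template).  The exact
    normalization is an assumption, not the paper's verbatim one."""
    f = np.asarray(f, dtype=float)
    t = np.asarray(t, dtype=float)
    a = np.sum(f * t)              # foreground in both
    b = np.sum(f * (1 - t))        # foreground in input only
    c = np.sum((1 - f) * t)        # foreground in template only
    d = np.sum((1 - f) * (1 - t))  # background in both
    return (a * d - b * c) / np.sqrt((a + c) * (b + d))

template = np.array([1, 1, 0, 0, 1, 0, 1, 0])
noisy = np.array([1, 1, 0, 0, 1, 0, 0, 0])  # one foreground pixel lost
inverted = 1 - template                      # reverse-contrast pattern
s_match = complementary_similarity(noisy, template)
s_anti = complementary_similarity(inverted, template)
print(s_match, s_anti)
```

Because the measure weighs agreement on both foreground and background (a and d) against disagreement (b and c), a reverse-contrast character scores strongly negative rather than merely low, which is consistent with the robustness to reverse contrast reported above.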
Traceability in hardness measurements: from the definition to industry
NASA Astrophysics Data System (ADS)
Germak, Alessandro; Herrmann, Konrad; Low, Samuel
2010-04-01
The measurement of hardness has been and continues to be of significant importance to many of the world's manufacturing industries. Conventional hardness testing is the most commonly used method for acceptance testing and production quality control of metals and metallic products. Instrumented indentation is one of the few techniques available for obtaining various property values for coatings and electronic products in the micrometre and nanometre dimensional scales. For these industries to be successful, it is critical that measurements made by suppliers and customers agree within some practical limits. To help assure this measurement agreement, a traceability chain for hardness measurement, from the hardness definition to industry, has developed and evolved over the past 100 years, but its development has been complicated. A hardness measurement value not only requires traceability of force, length and time measurements but also requires traceability of the hardness values measured by the hardness machine. These multiple traceability paths are needed because a hardness measurement is affected by other influence parameters that are often difficult to identify, quantify and correct. This paper describes the current situation of hardness measurement traceability that exists for the conventional hardness methods (i.e. Rockwell, Brinell, Vickers and Knoop hardness) and for special-application hardness and indentation methods (i.e. elastomer, dynamic, portables and instrumented indentation).
NASA Astrophysics Data System (ADS)
Okuyama, Keita; Sasahira, Akira; Noshita, Kenji; Yoshida, Takuma; Kato, Kazuyuki; Nagasaki, Shinya; Ohe, Toshiaki
Experimental effort to evaluate the barrier performance of geologic disposal requires relatively long testing periods and chemically stable conditions. We have developed a new technique, the micro mock-up method, that measures both nuclide diffusivity and the sorption coefficient within a day, overcoming this disadvantage of the conventional method. In this method, a Teflon plate having a micro channel (10-200 μm depth, 2 or 4 mm width) is placed just beneath the rock sample plate, and radionuclide solution is injected into the channel at a constant rate. The breakthrough curve is measured until a steady state is reached. The outlet flux at steady state, however, does not match the inlet flux because of matrix diffusion into the rock body. This inlet-outlet difference is simply related to the effective diffusion coefficient (De) and the distribution coefficient (Kd) of the rock sample. We then adopt a fitting procedure to estimate Kd and De values by comparing the observation to the theoretical curve of the two-dimensional diffusion-advection equation. In the present study, we measured De of 3H by using both the micro mock-up method and the conventional through-diffusion method for comparison. The values of De obtained in the two different ways for a granite sample (Inada area of Japan) were nearly identical, 1.0 × 10⁻¹¹ and 9.0 × 10⁻¹² m²/s, but the testing periods differed greatly: 10 h and 3 days, respectively. We also measured the breakthrough curve of 85Sr, and the resulting Kd and De agreed well with a previous study obtained by batch sorption experiments with crushed samples. The experimental evidence and the above advantages reveal that the micro mock-up method, based on the microreactor concept, is powerful and much more advantageous than the conventional method.
Color-coded visualization of magnetic resonance imaging multiparametric maps
NASA Astrophysics Data System (ADS)
Kather, Jakob Nikolas; Weidner, Anja; Attenberger, Ulrike; Bukschat, Yannick; Weis, Cleo-Aron; Weis, Meike; Schad, Lothar R.; Zöllner, Frank Gerrit
2017-01-01
Multiparametric magnetic resonance imaging (mpMRI) data are increasingly used in the clinic, e.g. for the diagnosis of prostate cancer. In contrast to conventional MR imaging data, multiparametric data typically include functional measurements such as diffusion and perfusion imaging sequences. Conventionally, these measurements are visualized with a one-dimensional color scale, allowing only one-dimensional information to be encoded. Yet, human perception places visual information in a three-dimensional color space. In theory, each dimension of this space can be utilized to encode visual information. We addressed this issue and developed a new method for tri-variate color-coded visualization of mpMRI data sets. We showed the usefulness of our method in a preclinical and in a clinical setting: In imaging data of a rat model of acute kidney injury, the method yielded characteristic visual patterns. In a clinical data set of N = 13 prostate cancer mpMRI data, we assessed diagnostic performance in a blinded study with N = 5 observers. Compared to conventional radiological evaluation, color-coded visualization was comparable in terms of positive and negative predictive values. Thus, we showed that human observers can successfully make use of the novel method. This method can be broadly applied to visualize different types of multivariate MRI data.
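A minimal sketch of tri-variate color coding: normalize three parameter maps to [0, 1] and stack them as RGB channels. The maps below are tiny hypothetical arrays, and the paper's actual color-space design is more elaborate than this direct RGB assignment:

```python
import numpy as np

def trivariate_rgb(maps):
    """Stack three parameter maps into one RGB image by min-max
    normalizing each map (a minimal sketch of tri-variate coding)."""
    channels = []
    for m in maps:
        m = np.asarray(m, dtype=float)
        span = m.max() - m.min()
        channels.append((m - m.min()) / span if span else np.zeros_like(m))
    return np.stack(channels, axis=-1)

# Hypothetical 2x2 diffusion (ADC), perfusion and T2 maps.
adc = np.array([[0.5, 1.0], [1.5, 2.0]])
perf = np.array([[10.0, 40.0], [20.0, 30.0]])
t2 = np.array([[80.0, 80.0], [120.0, 100.0]])
rgb = trivariate_rgb([adc, perf, t2])
print(rgb.shape)
```

Each voxel's hue then jointly encodes all three parameters, so tissue classes with distinct parameter combinations appear as distinct colors.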
NASA Astrophysics Data System (ADS)
Filintas, Agathos, , Dr; Hatzigiannakis, Evagellos, , Dr; Arampatzis, George, , Dr; Ilias, Andreas; Panagopoulos, Andreas, , Dr; Hatzispiroglou, Ioannis
2015-04-01
The aim of the present study is a thorough comparison of conventional and innovative hydrometric methods and tools for river flow monitoring. A case study was conducted in the Stara river at the Agios Germanos monitoring station (northwest Greece), in order to investigate possible deviations between conventional and innovative methods-tools in river flow velocity and discharge. For this study, two flowmeters were used, both manufactured in 2013 (OTT Messtechnik GmbH, 2013): a) A conventional propeller flow velocity meter (OTT Model C2), a mechanical current flow meter with a calibration certification (BARGO), operated with a rod and a relocating device, along with a digital measuring device including an electronic flow calculator, data logger and real-time control display unit. The flowmeter has a measurement velocity range of 0.025-4.000 m/s. b) An innovative electromagnetic flowmeter (OTT Model MF pro), which consists of a compact and lightweight sensor and a robust handheld unit. Both system components are designed to be attached to conventional wading rods. The electromagnetic flowmeter uses Faraday's law of electromagnetic induction to measure the flow. When an electrically conductive fluid flows past the meter, a voltage is induced between a pair of electrodes placed at right angles to the direction of the magnetic field. The electrode voltage is directly proportional to the average fluid velocity. The electromagnetic flowmeter was operated with a rod and relocating device, along with a digital measuring device with various logging and graphical capabilities and various methods of velocity measurement (ISO/USGS standards). The flowmeter has a measurement velocity range of 0.000-6.000 m/s.
The river flow data were averaged over paired measurements of 60+60 seconds, and the measured river water flow velocity, depths and widths of the segments were used to estimate the cross-section's mean flow velocity in each measured segment. The mid-section method was then used for the overall discharge calculation across all segments of the flow area. The cross-section characteristics, the river flow velocity of the segments, and the mean water flow velocity and total discharge profile were measured, calculated and annotated, respectively. A series of concurrent conventional and innovative (electromagnetic) flow measurements was performed during 2014. The results and statistical analysis showed that the Froude number during the measurement period was in all cases Fr<1, which means that the water flow of the Stara river is classified as subcritical. The 12-month study showed various advantages for the electromagnetic sensor: it is virtually maintenance-free because there are no moving parts, no calibration was required in practice, and it can be used even at the lowest water velocities, from 0.000 m/s. Moreover, based on the concurrent hydrometric measurements of the Stara river, the velocity and discharge modelling and the statistical analysis, no statistically significant difference (α=0.05) was found between the mean velocities measured with a) the conventional and b) the electromagnetic method, except at low velocities, where a significant statistical difference was found and the electromagnetic method appears to be more accurate. Acknowledgments: Data in this study were collected in the framework of the national water resources monitoring network, supervised by the Special Secretariat for Water, Hellenic Ministry for the Environment and Climate Change. This project is elaborated in the framework of the operational program "Environment and Sustainable Development", which is co-funded by the National Strategic Reference Framework (NSRF) and the Public Investment Program (PIP).
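The mid-section discharge calculation and the Froude-number check described above can be sketched as follows; station distances, depths and velocities are hypothetical, not the Stara river data:

```python
import numpy as np

# Mid-section method: each vertical carries a panel of width
# (x[i+1] - x[i-1]) / 2, and discharge is the sum of v * d * width.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])     # station distance, m
d = np.array([0.0, 0.30, 0.45, 0.35, 0.0])  # depth, m
v = np.array([0.0, 0.20, 0.35, 0.25, 0.0])  # mean segment velocity, m/s

width = np.empty_like(x)
width[1:-1] = (x[2:] - x[:-2]) / 2
width[0] = (x[1] - x[0]) / 2
width[-1] = (x[-1] - x[-2]) / 2
Q = np.sum(v * d * width)                   # total discharge, m^3/s

# Froude number at the deepest vertical: Fr < 1 means subcritical flow.
g = 9.81
Fr = v[2] / np.sqrt(g * d[2])
print(f"Q = {Q:.4f} m^3/s, Fr = {Fr:.3f}")
```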
Feng, Jingwen; Lin, Jie; Zhang, Pengquan; Yang, Songnan; Sa, Yu; Feng, Yuanming
2017-08-29
High-content screening is commonly used in studies of the DNA damage response. The double-strand break (DSB) is one of the most harmful types of DNA damage lesions. The conventional method used to quantify DSBs is γH2AX foci counting, which requires manual adjustment and preset parameters and is usually regarded as imprecise, time-consuming, poorly reproducible, and inaccurate. Therefore, a robust automatic alternative method is highly desired. In this manuscript, we present a new method for quantifying DSBs which involves automatic image cropping, automatic foci-segmentation and fluorescent intensity measurement. Furthermore, an additional function was added for standardizing the measurement of DSB response inhibition based on co-localization analysis. We tested the method with a well-known inhibitor of DSB response. The new method requires only one preset parameter, which effectively minimizes operator-dependent variations. Compared with conventional methods, the new method detected a higher percentage difference of foci formation between different cells, which can improve measurement accuracy. The effects of the inhibitor on DSB response were successfully quantified with the new method (p = 0.000). The advantages of this method in terms of reliability, automation and simplicity show its potential in quantitative fluorescence imaging studies and high-content screening for compounds and factors involved in DSB response.
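A minimal sketch of the segmentation-and-intensity step: threshold the image (playing the role of the method's single preset parameter), label connected components as foci, and sum fluorescent intensity per focus. The image is a tiny synthetic array, not γH2AX data:

```python
import numpy as np
from scipy import ndimage

# Synthetic fluorescence image with two bright foci on a dark background.
img = np.zeros((12, 12))
img[2:4, 2:4] = 5.0    # focus 1
img[8:10, 7:9] = 3.0   # focus 2
threshold = 1.0        # the single preset parameter

# Segment foci and measure per-focus integrated intensity.
mask = img > threshold
labels, n_foci = ndimage.label(mask)
intensities = ndimage.sum(img, labels, index=list(range(1, n_foci + 1)))
print(n_foci, intensities)
```

Reducing the pipeline to one threshold parameter is what minimizes operator-dependent variation relative to manual foci counting.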
Apparatus and method for measuring viscosity
Murphy, R.J. Jr.
1986-02-25
The present invention is directed to an apparatus and method for measuring the viscosity of a fluid. This apparatus and method is particularly useful for the measurement of the viscosity of a liquid in a harsh environment characterized by high temperature and the presence of corrosive or deleterious gases and vapors which adversely affect conventional ball or roller bearings. The apparatus and method of the present invention employ one or more flexural or torsional bearings to suspend a bob capable of limited angular motion within a rotatable sleeve suspended from a stationary frame. 7 figs.
Improvement of spatial resolution in a Timepix based CdTe photon counting detector using ToT method
NASA Astrophysics Data System (ADS)
Park, Kyeongjin; Lee, Daehee; Lim, Kyung Taek; Kim, Giyoon; Chang, Hojong; Yi, Yun; Cho, Gyuseong
2018-05-01
Photon counting detectors (PCDs) have been recognized as potential candidates in X-ray radiography and computed tomography due to their many advantages over conventional energy-integrating detectors. In particular, a PCD-based X-ray system shows an improved contrast-to-noise ratio, reduced radiation exposure dose, and more importantly, exhibits a capability for material decomposition with energy binning. For some applications, a very high resolution is required, which translates into smaller pixel size. Unfortunately, small pixels may suffer from energy spectral distortions (distortion in energy resolution) due to charge sharing effects (CSEs). In this work, we propose a method for correcting CSEs by measuring the point of interaction of an incident X-ray photon by the time-of-threshold (ToT) method. Moreover, we also show that it is possible to obtain an X-ray image with a reduced pixel size by using the concept of virtual pixels at a given pixel size. To verify the proposed method, modulation transfer function (MTF) and signal-to-noise ratio (SNR) measurements were carried out with the Timepix chip combined with the CdTe pixel sensor. The X-ray test condition was set at 80 kVp with 5 μA, and a tungsten edge phantom and a lead line phantom were used for the measurements. Enhanced spatial resolution was achieved by applying the proposed method when compared to that of the conventional photon counting method. From experiment results, MTF increased from 6.3 (conventional counting method) to 8.3 lp/mm (proposed method) at 0.3 MTF. On the other hand, the SNR decreased from 33.08 to 26.85 dB due to four virtual pixels.
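One simple way to estimate a photon's interaction point from charge shared across a pixel cluster is a charge-weighted centroid of the ToT values, which can then be binned into virtual sub-pixels. This is a simplification of the paper's time-of-threshold scheme, and the ToT values are hypothetical:

```python
import numpy as np

# ToT counts in a 2x2 pixel cluster produced by charge sharing
# (hypothetical values; larger ToT = more collected charge).
tot = np.array([[60.0, 20.0],
                [15.0, 5.0]])

rows, cols = np.indices(tot.shape)
total = tot.sum()
y = (rows * tot).sum() / total   # sub-pixel row coordinate
x = (cols * tot).sum() / total   # sub-pixel column coordinate

# Assign the hit to one of the 2x2 "virtual pixels" in the cluster.
virtual = (int(round(y)), int(round(x)))
print(f"centroid = ({y:.2f}, {x:.2f}), virtual pixel = {virtual}")
```

Binning the continuous centroid into virtual pixels is what lets a fixed physical pitch yield an image with finer effective sampling, at the cost of some SNR, as reported above.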
Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan
2017-01-01
In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. According to the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of this proposed method is described in detail. The computer simulation and lake experiments results indicate that this method can realize the azimuth angle estimation with high precision by using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in frequency-domain and achieves computational complexity reduction. PMID:28230763
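The conventional passive azimuth estimate mentioned above uses the time-averaged acoustic intensity components formed from the AVS's pressure and particle-velocity channels. A synthetic-signal sketch; the plane-wave model and all parameters are assumptions for illustration:

```python
import numpy as np

# Synthetic plane wave arriving from azimuth theta_true: the particle
# velocity is parallel to the arrival direction, so the time-averaged
# intensity components <p*vx>, <p*vy> point toward the source.
fs, f0, theta_true = 8000.0, 500.0, np.deg2rad(40.0)
t = np.arange(0, 0.5, 1 / fs)
p = np.cos(2 * np.pi * f0 * t)       # pressure channel
vx = np.cos(theta_true) * p          # velocity channels (unit scaling)
vy = np.sin(theta_true) * p

Ix, Iy = np.mean(p * vx), np.mean(p * vy)
theta_est = np.degrees(np.arctan2(Iy, Ix))
print(f"estimated azimuth = {theta_est:.1f} deg")
```

In noise, the averaging suppresses uncorrelated components; the paper's improvement applies matched filtering before this step to raise the effective SNR in an active-sonar setting.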
Analysis of fracture in sheet bending and roll forming
NASA Astrophysics Data System (ADS)
Deole, Aditya D.; Barnett, Matthew; Weiss, Matthias
2018-05-01
The bending limit or minimum bending radius of sheet metal is conventionally measured in a wiping (swing arm) or in a vee bend test and reported as the minimum radius of the tool over which the sheet can be bent without fracture. Frequently the material kinks while bending, so that the actual inner bend radius of the sheet is smaller than the tool radius, giving rise to inaccuracy in these methods. Previous studies have shown that conventional bend test methods may under-estimate formability in bending-dominated processes such as roll forming. A new test procedure is proposed here to improve the understanding and measurement of fracture in bending and roll forming. In this study, conventional wiping and vee bend tests were performed on martensitic steel to determine the minimum bend radius. In addition, the vee bend test was performed in an Erichsen sheet metal tester equipped with the GOM Aramis system to enable strain measurement on the outer surface during bending. The strain measurement before the onset of fracture was then used to determine the minimum bend radius. To compare this result with a technological process, a vee channel was roll formed and in-situ strain measurement carried out with the Vialux Autogrid system. The strain distribution at fracture in the roll forming process is compared with that predicted by the conventional bending tests and by the improved process. It is shown that for this forming operation and material, the improved procedure gives a more accurate prediction of fracture.
Bratos, Manuel; Bergin, Jumping M; Rubenstein, Jeffrey E; Sorensen, John A
2018-03-17
Conventional impression techniques to obtain a definitive cast for a complete-arch implant-supported prosthesis are technique-sensitive and time-consuming. Direct optical recording with a camera could offer an alternative to conventional impression making. The purpose of this in vitro study was to test a novel intraoral image capture protocol to obtain 3-dimensional (3D) implant spatial measurement data under simulated oral conditions of vertical opening and lip retraction. A mannequin was assembled simulating the intraoral conditions of a patient having an edentulous mandible with 5 interforaminal implants. Simulated mouth openings with 2 interincisal openings (35 mm and 55 mm) and 3 lip retractions (55 mm, 75 mm, and 85 mm) were evaluated to record the implant positions. The 3D spatial orientations of implant replicas embedded in the reference model were measured using a coordinate measuring machine (CMM) (control). Five definitive casts were made with a splinted conventional impression technique of the reference model. The positions of the implant replicas for each of the 5 casts were measured with a Nobel Procera Scanner (conventional digital method). For the prototype, optical targets were secured to the implant replicas, and 3 sets of 12 images each were recorded for the photogrammetric process of 6 groups of retractions and openings using a digital camera and a standardized image capture protocol. Dimensional data were imported into photogrammetry software (photogrammetry method). The calculated and/or measured precision and accuracy of the implant positions in 3D space for the 6 groups were compared with 1-way ANOVA with an F-test (α=.05). The precision (standard error [SE] of measurement) for CMM was 3.9 μm (95% confidence interval [CI] 2.7 to 7.1 μm). For the conventional impression method, the SE of measurement was 17.2 μm (95% CI 10.3 to 49.4 μm). 
For photogrammetry, a grand mean was calculated for groups MinR-AvgO, MinR-MaxO, AvgR-AvgO, and MaxR-AvgO, obtaining a value of 26.8 μm (95% CI 18.1 to 51.4 μm). The overall linear measurement error for accurately locating the top center points (TCP) followed a similar pattern as for precision. CMM (coordinate measuring machine) measurement represents the nonclinical gold standard, with an average TCP distance error of 4.6 μm (95% CI 3.5 to 6 μm). The photogrammetry groups presented accuracies that ranged from 47 μm (SD 9.2) to 63 μm (SD 17.6). The grand mean of accuracy was calculated as 55.2 μm (95% CI 8.8 to 130.8 μm). The CMM group (control) demonstrated the highest levels of accuracy and precision. Most of the groups with the photogrammetric method were statistically similar to the conventional group except for groups AvgR-MaxO and MaxR-MaxO, which represented maximum opening with average retraction and maximum opening with maximum retraction. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Measurement of Crystalline Silica Aerosol Using Quantum Cascade Laser-Based Infrared Spectroscopy.
Wei, Shijun; Kulkarni, Pramod; Ashley, Kevin; Zheng, Lina
2017-10-24
Inhalation exposure to airborne respirable crystalline silica (RCS) poses major health risks in many industrial environments. There is a need for new sensitive instruments and methods for in-field or near real-time measurement of crystalline silica aerosol. The objective of this study was to develop an approach, using quantum cascade laser (QCL)-based infrared spectroscopy (IR), to quantify airborne concentrations of RCS. Three sampling methods were investigated for their potential for effective coupling with QCL-based transmittance measurements: (i) conventional aerosol filter collection, (ii) focused spot sample collection directly from the aerosol phase, and (iii) dried spot obtained from deposition of liquid suspensions. Spectral analysis methods were developed to obtain IR spectra from the collected particulate samples in the range 750-1030 cm-1. The new instrument was calibrated and the results were compared with standardized methods based on Fourier transform infrared (FTIR) spectrometry. Results show that significantly lower detection limits for RCS (≈330 ng), compared to conventional infrared methods, could be achieved with effective microconcentration and careful coupling of the particulate sample with the QCL beam. These results offer promise for further development of sensitive filter-based laboratory methods and portable sensors for near real-time measurement of crystalline silica aerosol.
Brosius, Nevin; Ward, Kevin; Matsumoto, Satoshi; SanSoucie, Michael; Narayanan, Ranga
2018-01-01
In this work, a method for the measurement of surface tension using continuous periodic forcing is presented. To reduce gravitational effects, samples are electrostatically levitated prior to forcing. The method, called Faraday forcing, is particularly well suited for fluids that require high-temperature measurements, such as liquid metals, where conventional surface tension measurement methods are not possible. It offers distinct advantages over the conventional pulse-decay analysis method when the sample viscosity is high or the levitation feedback control system is noisy. In the current method, levitated drops are continuously translated about a mean position at a small, constant forcing amplitude over a range of frequencies. At a particular frequency in this range, the drop suddenly enters a state of resonance, which is confirmed by large excursions of prolate/oblate deformations about the mean spherical shape. The arrival at this resonant condition is a signature that the parametric forcing frequency is equal to the drop's natural frequency, the latter being a known function of surface tension. A description of the experimental procedure is presented. A proof of concept is given using pure Zr and a Ti39.5Zr39.5Ni21 alloy as examples. The results compare favorably with accepted literature values obtained using the pulse-decay method.
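The final step, converting the drop's natural frequency to surface tension, can be illustrated with Rayleigh's relation for the fundamental (l = 2) oscillation mode of an inviscid free drop, which reduces to σ = 3πmf²/8 in terms of the drop mass m and resonant frequency f. A hedged sketch; the mass and frequency values below are invented for illustration, not data from the paper:

```python
import math

def surface_tension_from_resonance(mass_kg, resonant_freq_hz):
    """Estimate surface tension (N/m) from the l=2 Rayleigh resonance
    of a levitated spherical drop: sigma = 3*pi*m*f^2 / 8."""
    return 3.0 * math.pi * mass_kg * resonant_freq_hz**2 / 8.0

# Illustrative numbers: a 1 g metal drop resonating at 30 Hz
sigma = surface_tension_from_resonance(1.0e-3, 30.0)
```

Real levitated-drop analyses may also apply corrections for drop charge and aspherical distortion, which this sketch omits.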
NASA Astrophysics Data System (ADS)
Zhang, Rui; Xin, Binjie
2016-08-01
Yarn density is always considered as the fundamental structural parameter used for the quality evaluation of woven fabrics. The conventional yarn density measurement method is based on one-side analysis. In this paper, a novel density measurement method is developed for yarn-dyed woven fabrics based on a dual-side fusion technique. Firstly, a lab-used dual-side imaging system is established to acquire both face-side and back-side images of woven fabric and the affine transform is used for the alignment and fusion of the dual-side images. Then, the color images of the woven fabrics are transferred from the RGB to the CIE-Lab color space, and the intensity information of the image extracted from the L component is used for texture fusion and analysis. Subsequently, three image fusion methods are developed and utilized to merge the dual-side images: the weighted average method, wavelet transform method and Laplacian pyramid blending method. The fusion efficacy of each method is evaluated by three evaluation indicators and the best of them is selected to do the reconstruction of the complete fabric texture. Finally, the yarn density of the fused image is measured based on the fast Fourier transform, and the yarn alignment image could be reconstructed using the inverse fast Fourier transform. Our experimental results show that the accuracy of density measurement by using the proposed method is close to 99.44% compared with the traditional method and the robustness of this new proposed method is better than that of conventional analysis methods.
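The FFT-based density step can be sketched as follows: the dominant non-DC peak of a 1-D intensity profile taken across the yarns gives the yarn spacing frequency. This is a minimal illustration with a synthetic cosine profile, not the authors' full dual-side fusion pipeline; the image scale (pixels per cm) is an assumed input:

```python
import numpy as np

def yarn_density_fft(profile, pixels_per_cm):
    """Estimate yarn density (yarns/cm) as the dominant non-DC
    frequency of a 1-D intensity profile taken across the yarns."""
    n = len(profile)
    spectrum = np.abs(np.fft.rfft(profile - np.mean(profile)))
    k = np.argmax(spectrum[1:]) + 1          # skip the DC bin
    cycles_per_pixel = k / n
    return cycles_per_pixel * pixels_per_cm  # yarns per cm

# Synthetic profile: 25 yarns/cm imaged at an assumed 250 pixels/cm
x = np.arange(1000)
profile = np.cos(2 * np.pi * 0.1 * x)        # 0.1 cycles/pixel
density = yarn_density_fft(profile, 250.0)
```

On a real fabric image the profile would come from summing the fused image along one yarn direction before the transform.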
DOT National Transportation Integrated Search
2016-11-28
Intelligent Compaction (IC) is considered to be an innovative technology intended to address some of the problems associated with conventional compaction methods of earthwork (e.g. stiffness-based measurements instead of density-based measurements). I...
Holographic Refraction and the Measurement of Spherical Ametropia.
Nguyen, Nicholas Hoai Nam
2016-10-01
To evaluate the performance of a holographic logMAR chart for the subjective spherical refraction of the human eye. Bland-Altman analysis was used to assess the level of agreement between subjective spherical refraction using the holographic logMAR chart and conventional autorefraction and subjective spherical refraction. The 95% limits of agreement (LoA) were calculated between holographic refraction and the two standard methods (subjective and autorefraction). Holographic refraction gave a lower mean spherical refraction when compared to conventional refraction (LoA 0.11 ± 0.65 D) and when compared to autorefraction (LoA 0.36 ± 0.77 D). After correcting for systematic bias, this agreement is comparable to that between autorefraction and conventional subjective refraction (LoA 0.45 ± 0.79 D). After correcting for differences in vergence distance and chromatic aberration between holographic and conventional refraction, approximately 65% (group 1) of measurements between holography and conventional subjective refraction were similar (MD = 0.13 D, SD = 0.00 D). The remaining 35% (group 2) had a mean difference of 0.45 D (SD = 0.12 D) between the two subjective methods. Descriptive statistics showed that group 2's mean age (21 years, SD = 13 years) was considerably lower than group 1's mean age (41 years, SD = 17 years), suggesting accommodation may have a role in the greater mean difference of group 2. Overall, holographic refraction has good agreement with conventional refraction and is a viable alternative for spherical subjective refraction. A larger bias between holographic and conventional refraction was found in younger subjects than in older subjects, suggesting an association between accommodation and myopic over-correction during holographic refraction.
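The Bland-Altman computation used here, the bias plus 95% limits of agreement of the paired differences, can be sketched in a few lines; the refraction values below are invented for illustration, not study data:

```python
import statistics

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired methods."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)           # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical spherical refractions (D) from two methods
holo = [-1.00, -2.25, -0.50, -3.00, -1.75]
conv = [-1.25, -2.25, -0.75, -2.75, -2.00]
bias, (lo, hi) = bland_altman(holo, conv)
```

The LoA values quoted in the abstract (e.g. 0.11 ± 0.65 D) are of exactly this bias-plus-half-width form.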
Tomita, Yuki; Uechi, Jun; Konno, Masahiro; Sasamoto, Saera; Iijima, Masahiro; Mizoguchi, Itaru
2018-04-17
We compared the accuracy of digital models generated by desktop scanning of conventional impression/plaster models versus intraoral scanning. Eight ceramic spheres were attached to the buccal molar regions of dental epoxy models, and reference linear-distance measurements were determined using a contact-type coordinate measuring instrument. Alginate (AI group) and silicone (SI group) impressions were taken and converted into cast models using dental stone; the models were scanned using a desktop scanner. As an alternative, intraoral scans were taken using an intraoral scanner, and digital models were generated from these scans (IOS group). Twelve linear-distance measurement combinations were calculated between different sphere centers for all digital models. There were no significant differences among the three groups using the total of six linear-distance measurements. When limited to five linear-distance measurements, the IOS group showed significantly higher accuracy compared to the AI and SI groups. Intraoral scans may be more accurate compared to scans of conventional impression/plaster models.
Rajshekar, Mithun; Julian, Roberta; Williams, Anne-Marie; Tennant, Marc; Forrest, Alex; Walsh, Laurence J; Wilson, Gary; Blizzard, Leigh
2017-09-01
Intra-oral 3D scanning of dentitions has the potential to provide a fast, accurate and non-invasive method of recording dental information. The aim of this study was to assess the reliability of measurements of human dental casts made using a portable intra-oral 3D scanner appropriate for field use. Two examiners each measured 84 tooth and 26 arch features of 50 sets of upper and lower human dental casts using digital hand-held callipers, and secondly using the measuring tool provided with the Zfx IntraScan intraoral 3D scanner applied to the virtual dental casts. The measurements were repeated at least one week later. Reliability and validity were quantified concurrently by calculation of intra-class correlation coefficients (ICC) and standard errors of measurement (SEM). The measurements of the 110 landmark features of human dental casts made using the intra-oral 3D scanner were virtually indistinguishable from measurements of the same features made using conventional hand-held callipers. The difference of means as a percentage of the average of the measurements by each method ranged between 0.030% and 1.134%. The intermethod SEMs ranged between 0.037% and 0.535%, and the inter-method ICCs ranged between 0.904 and 0.999, for both the upper and the lower arches. The inter-rater SEMs were one-half and the intra-method/rater SEMs were one-third of the inter-method values. This study demonstrates that the Zfx IntraScan intra-oral 3D scanner with its virtual on-screen measuring tool is a reliable and valid method for measuring the key features of dental casts. Copyright © 2017 Elsevier B.V. All rights reserved.
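Given the SD of a repeated measure and its ICC, the SEM reported in studies like this follows the standard relation SEM = SD·√(1 − ICC). A small sketch; the 0.50 mm / 0.96 inputs are illustrative, not values from the study:

```python
import math

def standard_error_of_measurement(sd, icc):
    """SEM = SD * sqrt(1 - ICC): the within-subject measurement noise
    implied by a reliability coefficient."""
    return sd * math.sqrt(1.0 - icc)

# Hypothetical tooth-feature measure: SD 0.50 mm, ICC 0.96
sem = standard_error_of_measurement(0.50, 0.96)
```

A high ICC thus translates directly into a small SEM relative to the between-subject spread, which is why the two are reported together.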
Dobbins, James T; McAdams, H Page; Sabol, John M; Chakraborty, Dev P; Kazerooni, Ella A; Reddy, Gautham P; Vikgren, Jenny; Båth, Magnus
2017-01-01
Purpose To conduct a multi-institutional, multireader study to compare the performance of digital tomosynthesis, dual-energy (DE) imaging, and conventional chest radiography for pulmonary nodule detection and management. Materials and Methods In this binational, institutional review board-approved, HIPAA-compliant prospective study, 158 subjects (43 subjects with normal findings) were enrolled at four institutions. Informed consent was obtained prior to enrollment. Subjects underwent chest computed tomography (CT) and imaging with conventional chest radiography (posteroanterior and lateral), DE imaging, and tomosynthesis with a flat-panel imaging device. Three experienced thoracic radiologists identified true locations of nodules (n = 516, 3-20-mm diameters) with CT and recommended case management by using Fleischner Society guidelines. Five other radiologists marked nodules and indicated case management by using images from conventional chest radiography, conventional chest radiography plus DE imaging, tomosynthesis, and tomosynthesis plus DE imaging. Sensitivity, specificity, and overall accuracy were measured by using the free-response receiver operating characteristic method and the receiver operating characteristic method for nodule detection and case management, respectively. Results were further analyzed according to nodule diameter categories (3-4 mm, >4 mm to 6 mm, >6 mm to 8 mm, and >8 mm to 20 mm). Results Maximum lesion localization fraction was higher for tomosynthesis than for conventional chest radiography in all nodule size categories (3.55-fold for all nodules, P < .001; 95% confidence interval [CI]: 2.96, 4.15). Case-level sensitivity was higher with tomosynthesis than with conventional chest radiography for all nodules (1.49-fold, P < .001; 95% CI: 1.25, 1.73). 
Case management decisions showed better overall accuracy with tomosynthesis than with conventional chest radiography, as given by the area under the receiver operating characteristic curve (1.23-fold, P < .001; 95% CI: 1.15, 1.32). There were no differences in any specificity measures. DE imaging did not significantly affect nodule detection when paired with either conventional chest radiography or tomosynthesis. Conclusion Tomosynthesis outperformed conventional chest radiography for lung nodule detection and determination of case management; DE imaging did not show significant differences over conventional chest radiography or tomosynthesis alone. These findings indicate performance likely achievable with a range of reader expertise. © RSNA, 2016 Online supplemental material is available for this article.
Application of travel time information for traffic management : technical summary.
DOT National Transportation Integrated Search
2012-01-01
Using conventional methods, it is extremely costly to measure detailed traffic characteristics in high quality spatial or temporal resolution. For analyzing travel characteristics on roadways, the floating car method, developed in the 1920s, has hist...
Time-series analysis of sleep wake stage of rat EEG using time-dependent pattern entropy
NASA Astrophysics Data System (ADS)
Ishizaki, Ryuji; Shinba, Toshikazu; Mugishima, Go; Haraguchi, Hikaru; Inoue, Masayoshi
2008-05-01
We performed electroencephalography (EEG) for six male Wistar rats to clarify temporal behaviors at different levels of consciousness. Levels were identified both by conventional sleep analysis methods and by our novel entropy method. In our method, time-dependent pattern entropy is introduced, by which EEG is reduced to binary symbolic dynamics and the pattern of symbols in a sliding temporal window is considered. A high correlation was obtained between level of consciousness as measured by the conventional method and mean entropy in our entropy method. Mean entropy was maximal while awake (stage W) and decreased as sleep deepened. These results suggest that time-dependent pattern entropy may offer a promising method for future sleep research.
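Time-dependent pattern entropy can be sketched as follows: binarize the signal against its median, collect fixed-length symbol words, and take the Shannon entropy of the word distribution; the paper evaluates this in a sliding temporal window. A minimal single-window sketch, where the thresholding rule and word length are assumptions rather than the authors' exact parameters:

```python
import math
from collections import Counter

def pattern_entropy(signal, word_len=3):
    """Reduce a signal to binary symbols (above/below its median) and
    return the Shannon entropy (bits) of fixed-length symbol patterns."""
    med = sorted(signal)[len(signal) // 2]   # (upper) median
    bits = [1 if x >= med else 0 for x in signal]
    words = [tuple(bits[i:i + word_len]) for i in range(len(bits) - word_len + 1)]
    counts = Counter(words)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A strictly alternating signal visits only two 3-bit patterns -> 1 bit
h = pattern_entropy([0, 1] * 50)
```

For EEG, this would be applied to each successive window so that entropy becomes a function of time, tracking the sleep stage.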
Yang, S; Liu, D G
2014-01-01
Objectives: The purposes of the study are to investigate the consistency of linear measurements between CBCT orthogonally synthesized cephalograms and conventional cephalograms and to evaluate the influence of different magnifications on these comparisons based on a simulation algorithm. Methods: Conventional cephalograms and CBCT scans were taken of 12 dry skulls with spherical metal markers. Orthogonally synthesized cephalograms were created from the CBCT data. Linear parameters on both cephalograms were measured via Photoshop CS v. 5.0 (Adobe® Systems, San Jose, CA), named the measurement group (MG). Bland–Altman analysis was utilized to assess the agreement of the two imaging modalities. Reproducibility was investigated using a paired t-test. By a specific mathematical programme, "cepha", corresponding linear parameters [mandibular corpus length (Go-Me), mandibular ramus length (Co-Go), posterior facial height (Go-S)] on these two types of cephalograms were calculated, named the simulation group (SG). Bland–Altman analysis was used to assess the agreement between MG and SG. Simulated linear measurements with varying magnifications were generated based on "cepha" as well. Bland–Altman analysis was used to assess the agreement of simulated measurements between the two modalities. Results: Bland–Altman analysis suggested agreement between measurements on conventional cephalograms and orthogonally synthesized cephalograms, with a mean bias of 0.47 mm. Comparison between MG and SG showed that the difference did not reach clinical significance. The consistency between simulated measurements of both modalities with four different magnifications was demonstrated. Conclusions: Normative data from conventional cephalograms could be used for CBCT orthogonally synthesized cephalograms during this transitional period. PMID:25029593
Wellskins and slug tests: where's the bias?
NASA Astrophysics Data System (ADS)
Rovey, C. W.; Niemann, W. L.
2001-03-01
Pumping tests in an outwash sand at the Camp Dodge Site give hydraulic conductivities ( K) approximately seven times greater than conventional slug tests in the same wells. To determine if this difference is caused by skin bias, we slug tested three sets of wells, each in a progressively greater stage of development. Results were analyzed with both the conventional Bouwer-Rice method and the deconvolution method, which quantifies the skin and eliminates its effects. In 12 undeveloped wells the average skin is +4.0, causing underestimation of conventional slug-test K (Bouwer-Rice method) by approximately a factor of 2 relative to the deconvolution method. In seven nominally developed wells the skin averages just +0.34, and the Bouwer-Rice method gives K within 10% of that calculated with the deconvolution method. The Bouwer-Rice K in this group is also within 5% of that measured by natural-gradient tracer tests at the same site. In 12 intensely developed wells the average skin is <-0.82, consistent with an average skin of -1.7 measured during single-well pumping tests. At this site the maximum possible skin bias is much smaller than the difference between slug and pumping-test Ks. Moreover, the difference in K persists even in intensely developed wells with negative skins. Therefore, positive wellskins do not cause the difference in K between pumping and slug tests at this site.
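The Bouwer-Rice estimate used for the conventional analyses has the closed form K = rc²·ln(Re/rw)/(2Le)·(1/t)·ln(y0/yt). A hedged sketch, with the ln(Re/rw) shape term supplied directly rather than read from the Bouwer-Rice empirical curves, and all input values invented for illustration:

```python
import math

def bouwer_rice_k(rc, ln_re_rw, le, t, y0, yt):
    """Bouwer-Rice slug-test hydraulic conductivity (m/s):
    K = rc^2 * ln(Re/rw) / (2*Le) * (1/t) * ln(y0/yt).
    ln(Re/rw) is the effective-radius shape term, normally obtained
    from the Bouwer-Rice empirical curves (supplied here directly)."""
    return (rc**2 * ln_re_rw) / (2.0 * le) * math.log(y0 / yt) / t

# Illustrative: 5 cm casing radius, 1 m screen length, head recovering
# from 1.0 m to 0.1 m of displacement in 60 s, ln(Re/rw) taken as 2.0
K = bouwer_rice_k(rc=0.05, ln_re_rw=2.0, le=1.0, t=60.0, y0=1.0, yt=0.1)
```

A positive wellskin steepens the apparent recovery near the well, which is how the skin biases this K estimate low, as the abstract describes.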
Motion capture for human motion measuring by using single camera with triangle markers
NASA Astrophysics Data System (ADS)
Takahashi, Hidenori; Tanaka, Takayuki; Kaneko, Shun'ichi
2005-12-01
This study aims to realize a motion capture method for measuring 3D human motions by using a single camera. Although motion capture using multiple cameras is widely used in the sports, medical, and engineering fields, an optical motion capture method with one camera has not been established. In this paper, the authors achieved 3D motion capture by using one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers whose side lengths are known. The camera calibration methods produced 3D coordinate transformation parameters and a lens distortion parameter with the modified DLT method. The triangle markers enabled calculation of the depth coordinate in the camera coordinate system. Experiments on 3D position measurement using the MMC in a cubic measurement space 2 m on each side showed that the average error in measuring the center of gravity of a triangle marker was less than 2 mm. Compared with conventional motion capture using multiple cameras, the MMC has sufficient accuracy for 3D measurement. Also, by putting a triangle marker on each human joint, the MMC was able to capture a walking motion, a standing-up motion, and a bending and stretching motion. In addition, a method using a triangle marker together with conventional spherical markers was proposed. Finally, a method to estimate the position of a marker by measuring its velocity was proposed in order to improve the accuracy of the MMC.
A Comparison of Video versus Conventional Visual Reinforcement in 7- to 16-Month-Old Infants
ERIC Educational Resources Information Center
Lowery, Kristy J.; von Hapsburg, Deborah; Plyler, Erin L.; Johnstone, Patti
2009-01-01
Purpose: To compare response patterns to video visual reinforcement audiometry (VVRA) and conventional visual reinforcement audiometry (CVRA) in infants 7-16 months of age. Method: Fourteen normal-hearing infants aged 7-16 months (8 male, 6 female) participated. A repeated measures design was used. Each infant was tested with VVRA and CVRA over 2…
ERIC Educational Resources Information Center
Henaku, Christina Bampo; Pobbi, Michael Asamani
2017-01-01
Many researchers and educationists remain skeptical about the effectiveness of distance learning programs and have termed them second to the conventional training method. This perception is largely due to several challenges which exist within the management of distance learning programs across the country. The general aim of the study is to compare the…
A Rapid Leaf-Disc Sampler for Psychrometric Water Potential Measurements 1
Wullschleger, Stan D.; Oosterhuis, Derrick M.
1986-01-01
An instrument was designed which facilitates faster and more accurate sampling of leaf discs for psychrometric water potential measurements. The instrument consists of an aluminum housing, a spring-loaded plunger, and a modified brass-plated cork borer. The leaf-disc sampler was compared with the conventional method of sampling discs for measurement of leaf water potential with thermocouple psychrometers on a range of plant material including Gossypium hirsutum L., Zea mays L., and Begonia rex-cultorum L. The new sampler permitted a leaf disc to be excised and inserted into the psychrometer sample chamber in less than 7 seconds, which was more than twice as fast as the conventional method. This resulted in more accurate determinations of leaf water potential due to reduced evaporative water losses. The leaf-disc sampler also significantly reduced sample variability between individual measurements. This instrument can be used for many other laboratory and field measurements that necessitate leaf disc sampling. PMID:16664879
Tomassetti, Mauro; Merola, Giovanni; Martini, Elisabetta; Campanella, Luigi; Sanzò, Gabriella; Favero, Gabriele; Mazzei, Franco
2017-01-01
In this research, we developed a direct-flow surface plasmon resonance (SPR) immunosensor for ampicillin to perform direct, simple, and fast measurements of this important antibiotic. In order to better evaluate its performance, it was compared with a conventional amperometric immunosensor working in a competitive format, with the aim of identifying the real experimental advantages and disadvantages of the two methods. Results showed that certain analytical features of the new SPR immunodevice, such as the lower limit of detection (LOD) value and the width of the linear range, are poorer than those of a conventional amperometric immunosensor, which adversely affects its application to samples such as natural waters. On the other hand, the SPR immunosensor was more selective to ampicillin, and measurements were more easily and quickly attained compared to those performed with the conventional competitive immunosensor. PMID:28394296
Timing Calibration in PET Using a Time Alignment Probe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moses, William W.; Thompson, Christopher J.
2006-05-05
We evaluate the Scanwell Time Alignment Probe for performing the timing calibration for the LBNL Prostate-Specific PET Camera. We calibrate the time delay correction factors for each detector module in the camera using two methods: the Time Alignment Probe (which measures the time difference between the probe and each detector module) and the conventional method (which measures the timing difference between all module-module combinations in the camera). These correction factors, which are quantized in 2 ns steps, are compared on a module-by-module basis. The values are in excellent agreement: of the 80 correction factors, 62 agree exactly, 17 differ by 1 step, and 1 differs by 2 steps. We also measure on-time and off-time counting rates when the two sets of calibration factors are loaded into the camera and find that they agree within statistical error. We conclude that the performance using the Time Alignment Probe and conventional methods is equivalent.
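The conventional calibration described above, which measures timing differences between all module pairs, amounts to solving an overdetermined linear system d_ij ≈ t_i − t_j for the per-module delays. A sketch of that solve on a toy 3-module example, not the camera's 80-module data; quantization to 2 ns steps is omitted:

```python
import numpy as np

def module_delays(pairs, n_modules):
    """Solve per-module time offsets t_i from pairwise timing
    differences d_ij ~ t_i - t_j by least squares, with module 0
    anchored at 0 to fix the global offset.
    pairs: list of (i, j, measured_difference)."""
    A = np.zeros((len(pairs) + 1, n_modules))
    b = np.zeros(len(pairs) + 1)
    for row, (i, j, d) in enumerate(pairs):
        A[row, i] = 1.0
        A[row, j] = -1.0
        b[row] = d
    A[-1, 0] = 1.0                     # anchor row: t_0 = 0
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t

# Consistent pairwise data generated from true delays 0, 2, 5 ns
pairs = [(0, 1, -2.0), (0, 2, -5.0), (1, 2, -3.0)]
t = module_delays(pairs, 3)
```

With noisy real data the least-squares residuals would average out inconsistencies between the redundant module-pair measurements.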
A study of methods to estimate debris flow velocity
Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.
2008-01-01
Debris flow velocities are commonly back-calculated from superelevation events which require subjective estimates of radii of curvature of bends in the debris flow channel or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii of curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. ?? 2008 Springer-Verlag.
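The superelevation back-calculation that the conventional method relies on is commonly written as the forced-vortex equation v = √(g·Rc·Δh/(k·b)), with radius of curvature Rc, superelevation Δh, flow width b, and an empirical correction factor k. A sketch with invented inputs (k = 1 for illustration):

```python
import math

def superelevation_velocity(radius_m, superelevation_m, width_m, k=1.0, g=9.81):
    """Forced-vortex estimate of debris-flow velocity (m/s) from
    superelevation at a channel bend: v = sqrt(g * Rc * dh / (k * b)).
    k is an empirical correction factor (1.0 here for illustration)."""
    return math.sqrt(g * radius_m * superelevation_m / (k * width_m))

# Illustrative: 50 m bend radius, 1.2 m superelevation, 10 m wide flow
v = superelevation_velocity(50.0, 1.2, 10.0)
```

The abstract's point is visible in this form: the estimate scales with √Rc, so a subjective radius-of-curvature pick propagates directly into the velocity.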
Yokoyama, Hidekatsu
2012-01-01
Direct irradiation of a sample using a quartz oscillator operating at 250 MHz was performed for EPR measurements. Because a quartz oscillator is a frequency fixed oscillator, the operating frequency of an EPR resonator (loop-gap type) was tuned to that of the quartz oscillator by using a single-turn coil with a varactor diode attached (frequency shift coil). Because the frequency shift coil was mobile, the distance between the EPR resonator and the coil could be changed. Coarse control of the resonant frequency was achieved by changing this distance mechanically, while fine frequency control was implemented by changing the capacitance of the varactor electrically. In this condition, EPR measurements of a phantom (comprised of agar with a nitroxide radical and physiological saline solution) were made. To compare the presented method with a conventional method, the EPR measurements were also done by using a synthesizer at the same EPR frequency. In the conventional method, the noise level increased at high irradiation power. Because such an increase in the noise was not observed in the presented method, high sensitivity was obtained at high irradiation power. Copyright © 2011 Elsevier Inc. All rights reserved.
Ergonomic evaluation of conventional and improved methods of aonla pricking with women workers.
Rai, Arpana; Gandhi, Sudesh; Sharma, D K
2012-01-01
Conventional and improved methods of aonla pricking were evaluated ergonomically in an experiment conducted for 20 minutes with women workers. The working heart rate, energy expenditure rate, total cardiac cost of work, and physiological cost of work with conventional tools varied from 93-102 beats.min-1, 6-7.5 kJ.min-1, 285-470 beats, and 14-23 beats.min-1, respectively, while with the machine they varied from 96-105 beats.min-1, 6.5-8 kJ.min-1, 336-540 beats, and 16-27 beats.min-1. The OWAS score for the conventional method was 2, indicating corrective measures needed in the near future, while for the machine it was 1, indicating no corrective measures. Results of the Nordic Musculoskeletal Questionnaire revealed that subjects complained of pain in the back, neck, right shoulder, and right hand due to unnatural body posture and repetitive movement with the hand tool. Moreover, pricking was carried out in improper lighting conditions (200-300 lux), resulting in finger injuries from the sharp edges of the hand tool, whereas with the machine no such problems were observed. Output with the machine increased threefold compared with hand pricking in a given time. The machine was found useful in terms of saving time, increased productivity, and enhanced safety and comfort, as it involved improved posture and was easy to handle and operate, thus increasing the efficiency of the worker and leading to a better quality of life.
NASA Astrophysics Data System (ADS)
Chinone, N.; Yamasue, K.; Hiranaga, Y.; Honda, K.; Cho, Y.
2012-11-01
Scanning nonlinear dielectric microscopy (SNDM) can be used to visualize polarization distributions in ferroelectric materials and dopant profiles in semiconductor devices. Without using a special sharp tip, we achieved an improved lateral resolution in SNDM through the measurement of super-higher-order nonlinearity up to the fourth order. We observed a multidomain single crystal congruent LiTaO3 (CLT) sample, and a cross section of a metal-oxide-semiconductor (MOS) field-effect-transistor (FET). The imaged domain boundaries of the CLT were narrower in the super-higher-order images than in the conventional image. Compared to the conventional method, the super-higher-order method resolved the more detailed structure of the MOSFET.
Equivalent orthotropic elastic moduli identification method for laminated electrical steel sheets
NASA Astrophysics Data System (ADS)
Saito, Akira; Nishikawa, Yasunari; Yamasaki, Shintaro; Fujita, Kikuo; Kawamoto, Atsushi; Kuroishi, Masakatsu; Nakai, Hideo
2016-05-01
In this paper, a combined numerical-experimental methodology for the identification of elastic moduli of orthotropic media is presented. Special attention is given to laminated electrical steel sheets, which are modeled as orthotropic media with nine independent engineering elastic moduli. The elastic moduli are determined specifically for use with finite element vibration analyses. We propose a three-step methodology based on a conventional nonlinear least squares fit between measured and computed natural frequencies. The methodology consists of: (1) successive augmentations of the objective function by increasing the number of modes, (2) initial condition updates, and (3) appropriate selection of the natural frequencies based on their sensitivities to the elastic moduli. Using the results of numerical experiments, it is shown that the proposed method achieves a more accurate converged solution than a conventional approach. Finally, the proposed method is applied to measured natural frequencies and mode shapes of the laminated electrical steel sheets. It is shown that the method can successfully identify the orthotropic elastic moduli that reproduce the measured natural frequencies and frequency response functions by finite element analysis with reasonable accuracy.
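The nonlinear least-squares idea can be sketched on a toy problem. Here an analytic cantilever-beam frequency formula stands in for the paper's finite element model, a single modulus is fitted instead of nine, and all beam properties are assumed for illustration only:

```python
import numpy as np

# Toy stand-in for an FE model: analytic natural frequencies of a
# cantilever beam, f_n = (beta_n^2 / (2*pi)) * sqrt(E*I / (rho*A*L^4)).
BETA = np.array([1.875, 4.694, 7.855])        # first three mode constants
RHO, A, I, L = 7850.0, 1e-4, 1e-8, 0.5        # steel-like beam (assumed)

def model_freqs(E):
    """Natural frequencies (Hz) as a function of Young's modulus E (Pa)."""
    return (BETA ** 2 / (2 * np.pi)) * np.sqrt(E * I / (RHO * A * L ** 4))

def fit_modulus(f_meas, E0=100e9, iters=20):
    """One-parameter Gauss-Newton least squares on frequency residuals."""
    E = E0
    for _ in range(iters):
        r = f_meas - model_freqs(E)
        # finite-difference Jacobian df/dE
        J = (model_freqs(E * 1.001) - model_freqs(E)) / (E * 0.001)
        E += J @ r / (J @ J)
    return E

E_true = 210e9
E_hat = fit_modulus(model_freqs(E_true))
print(f"recovered E = {E_hat / 1e9:.1f} GPa")
```

The paper's step (1), adding modes to the objective, corresponds here to lengthening the `BETA` array; with nine coupled moduli the sensitivity-based mode selection of step (3) becomes essential to keep the problem well-conditioned.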
NASA Technical Reports Server (NTRS)
Molino, J. A.
1982-01-01
A review of 34 studies indicates that several factors or variables might be important in providing a psychoacoustic foundation for measurements of the noise from helicopters. These factors are phase relations, tail rotor noise, repetition rate, crest level, and generic differences between conventional aircraft and helicopters. Particular attention was given to the impulsive noise known as blade slap. Analysis of the evidence for and against each factor reveals that, for the present state of scientific knowledge, none of these factors should be regarded as the basis for a significant noise measurement correction due to impulsive blade slap. The current method of measuring effective perceived noise level for conventional aircraft appears to be adequate for measuring helicopter noise as well.
Implementation of intelligent compaction technologies for road constructions in Wyoming.
DOT National Transportation Integrated Search
2015-03-01
Conventional test methods for roadway compaction cover less than 1% of roadway; whereas, intelligent compaction (IC) offers a method to measure 100% of a roadway. IC offers the ability to increase compaction uniformity of soils and asphalt paveme...
Microsurgical Versus Conventional Skin Closure in the Laboratory Rat (Rattus norvegicus)
microscope using 6/0 monocryl. Wound strength was measured using a published method. A harvested incision was suspended with forceps and water was slowly...evaluated histologically using published methods to examine vascularization, fibroblast proliferation, inflammation and epithelialization. Results
Fast measurement of bacterial susceptibility to antibiotics
NASA Technical Reports Server (NTRS)
Chappelle, E. W.; Picciolo, G. L.; Schrock, C. G.
1977-01-01
Method, based on photoanalysis of adenosine triphosphate using light-emitting reaction with luciferase-luciferin technique, saves time by eliminating isolation period required by conventional methods. Technique is also used to determine presence of infection as well as susceptibilities to several antibiotics.
Sacristán, Carlos; Carballo, Matilde; Muñoz, María Jesús; Bellière, Edwige Nina; Neves, Elena; Nogal, Verónica; Esperón, Fernando
2015-12-15
Cetacean morbillivirus (CeMV) (family Paramyxoviridae, genus Morbillivirus) is considered the most pathogenic virus of cetaceans. It was first implicated in the bottlenose dolphin (Tursiops truncatus) mass stranding episode along the Northwestern Atlantic coast in the late 1980s, and in several more recent worldwide epizootics in different Odontoceti species. This study describes a new one-step real-time reverse transcription fast polymerase chain reaction (real-time RT-fast PCR) method based on SYBR® Green to detect a fragment of the CeMV fusion protein gene. This primer set also works for conventional RT-PCR diagnosis. This method detected and identified all three well-characterized strains of CeMV: porpoise morbillivirus (PMV), dolphin morbillivirus (DMV) and pilot whale morbillivirus (PWMV). Relative sensitivity was measured by comparing the results obtained from 10-fold dilution series of PMV and DMV positive controls and a PWMV field sample, to those obtained by the previously described conventional phosphoprotein gene based RT-PCR method. Both the conventional and real-time RT-PCR methods involving the fusion protein gene were 100- to 1000-fold more sensitive than the previously described conventional RT-PCR method. Copyright © 2015 Elsevier B.V. All rights reserved.
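The "100- to 1000-fold more sensitive" figure follows directly from the 10-fold dilution series: each additional dilution step still detected by the new assay is one factor of ten. A trivial sketch of that arithmetic (function name assumed):

```python
def fold_sensitivity(extra_dilution_steps, dilution_factor=10):
    """Relative sensitivity implied by an endpoint dilution series:
    each extra dilution step still detected by the new assay
    corresponds to one factor of `dilution_factor`."""
    return dilution_factor ** extra_dilution_steps

# Detecting 2 or 3 more 10-fold dilutions than the reference RT-PCR
# corresponds to a 100-fold or 1000-fold sensitivity gain.
print(fold_sensitivity(2), fold_sensitivity(3))
```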
Measuring Surface Tension of a Flowing Soap Film
NASA Astrophysics Data System (ADS)
Sane, Aakash; Kim, Ildoo; Mandre, Shreyas
2016-11-01
It is well known that surface tension is sensitive to the presence of surfactants and many conventional methods exist to measure it. These techniques measure surface tension either by intruding into the system or by changing its geometry. Use of conventional methods in the case of a flowing soap film is not feasible because intruding the soap film changes surface tension due to Marangoni effect. We present a technique in which we measure the surface tension in situ of a flowing soap film without intruding into the film. A flowing soap film is created by letting soap solution drip between two wires. The interaction of the soap film with the wires causes the wires to deflect which can be measured. Surface tension is calculated using a relation between curvature of the wires and the surface tension. Our measurements indicate that the surface tension of the flowing soap film for our setup is around 0.05 N/m. The nature of this technique makes it favorable for measuring surface tension of flowing soap films whose properties change on intrusion.
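One way to see the kind of curvature-tension relation involved: a soap film pulls on a bounding wire with force 2γ per unit length (two air-liquid surfaces), and a wire under tension T loaded transversely by q per unit length bends to curvature κ = q/T, giving γ = Tκ/2. This is a hedged sketch of the force balance, not the authors' exact analysis, and the numbers are illustrative:

```python
def surface_tension(wire_tension_N, curvature_per_m):
    """Force balance on a wire element: a film pulling with q = 2*gamma
    per unit length bends a wire under tension T into curvature k = q/T,
    so gamma = T*k/2. Sketch only; the paper's relation may differ."""
    return wire_tension_N * curvature_per_m / 2.0

# Illustrative values (assumed): wire tension 0.5 N, measured curvature 0.2 1/m.
gamma = surface_tension(0.5, 0.2)
print(f"gamma = {gamma:.3f} N/m")
```

With these assumed inputs the result lands at the 0.05 N/m scale the abstract reports for a flowing soap film.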
Manufacturing Implant Supported Auricular Prostheses by Rapid Prototyping Techniques
Karatas, Meltem Ozdemir; Cifter, Ebru Demet; Ozenen, Didem Ozdemir; Balik, Ali; Tuncer, Erman Bulent
2011-01-01
Maxillofacial prostheses are usually fabricated on models obtained following impression procedures. Disadvantages of the conventional impression techniques used in the production of facial prostheses are deformation of the soft tissues caused by the impression material and disturbance to the patient. Additionally, production of a prosthesis by conventional methods takes longer. Recently, rapid prototyping techniques have been developed for extraoral prostheses in order to reduce these disadvantages of conventional methods. Rapid prototyping has the potential to simplify the procedure and decrease the laboratory work required. It eliminates the need for measurement and impression procedures and the preparation of a wax model to be performed by prosthodontists themselves. In the near future this technology will become a standard for fabricating maxillofacial prostheses. PMID:21912504
Method for measuring visual resolution at the retinal level.
Liang, J; Westheimer, G
1993-08-01
To measure the intrinsic resolving capacity of the retinal and neural levels of vision, we devised a method that creates two lines with controllable contrast on the retina. The line separation can be varied at will, down to values below those achievable with conventional optical techniques. Implementation of the method with use of a He-Ne laser leads to a procedure that permits analysis of the performance of the human visual apparatus.
NASA Astrophysics Data System (ADS)
Paynter, D.; Weston, S. J.; Cosgrove, V. P.; Thwaites, D. I.
2018-01-01
Flattening filter free (FFF) beams have reached widespread use for clinical treatment deliveries. The usual methods for FFF beam characterisation for their quality assurance (QA) require the use of associated conventional flattened beams (cFF). Methods for QA of FFF without the need to use associated cFF beams are presented and evaluated against current methods for both FFF and cFF beams. Inflection point normalisation is evaluated against conventional methods for the determination of field size and penumbra for field sizes from 3 cm × 3 cm to 40 cm × 40 cm at depths from dmax to 20 cm in water for matched and unmatched FFF beams and for cFF beams. A method for measuring symmetry in the cross-plane direction is suggested and evaluated, as FFF beams are insensitive to symmetry changes in this direction. Methods for characterising beam energy are evaluated and the impact of beam energy on profile shape compared to that of cFF beams. In-plane symmetry can be measured, as for cFF beams, using observed changes in the profile, whereas cross-plane symmetry can be measured by acquiring profiles at collimator angles of 0° and 180°. Beam energy and 'unflatness' can be measured, as with cFF beams, from observed shifts in the profile with changing beam energy. Normalising the inflection points of FFF beams to 55% results in an equivalent penumbra and field size measurement within 0.5 mm of conventional methods, with the exception of 40 cm × 40 cm fields at a depth of 20 cm. New proposed methods are presented that make it possible to independently carry out set-up and QA measurements on beam energy, flatness, symmetry and field size of an FFF beam without the need to reference an equivalent flattened beam of the same energy. The methods proposed can also be used to carry out this QA for flattened beams, resulting in universal definitions and methods for MV beams.
This is presented for beams produced by an Elekta linear accelerator, but is anticipated to also apply to other manufacturers’ beams.
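The inflection-point normalisation can be sketched numerically: locate the steepest points of each profile edge, scale the profile so those points sit at 55%, then read the field size off the 50% crossings. The synthetic sigmoid-edge profile and the simple gradient-based inflection search below are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def field_size_inflection(x, profile, norm=55.0):
    """Normalise a lateral profile so its inflection points sit at
    `norm` percent, then return the positions of the 50% crossings
    (their separation is the field size)."""
    g = np.gradient(profile, x)
    left = x < 0
    i_l = np.argmax(g[left])                    # steepest rising edge
    i_r = np.argmin(g[~left]) + left.sum()      # steepest falling edge
    infl = 0.5 * (profile[i_l] + profile[i_r])  # mean inflection value
    p = profile * (norm / infl)                 # inflection -> 55 %
    above = np.where(p >= 50.0)[0]

    def cross(i0, i1):
        # linear interpolation of the 50 % crossing between two samples
        return x[i0] + (50.0 - p[i0]) * (x[i1] - x[i0]) / (p[i1] - p[i0])

    return cross(above[0] - 1, above[0]), cross(above[-1], above[-1] + 1)

# Synthetic flat-topped profile with sigmoid edges at +/- 5 cm (assumed).
x = np.linspace(-10, 10, 2001)
profile = 100.0 / (1 + np.exp((np.abs(x) - 5) / 0.3))
l, r = field_size_inflection(x, profile)
print(f"field size = {r - l:.2f} cm")
```

On a real FFF profile the peaked shape means the inflection value is well below the central-axis dose, which is exactly why a fixed 50% definition fails and the 55%-of-inflection renormalisation is needed.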
Micro-scale temperature measurement method using fluorescence polarization
NASA Astrophysics Data System (ADS)
Tatsumi, K.; Hsu, C.-H.; Suzuki, A.; Nakabe, K.
2016-09-01
A novel method that can measure fluid temperature on a microscopic scale by measuring fluorescence polarization is described in this paper. The measurement technique is not influenced by the quenching effects which appear in conventional LIF methods and is believed to show higher reliability in temperature measurements. Experiments were performed using a microchannel flow and fluorescent molecular probes, and the effects of the fluid temperature, fluid viscosity, measurement time, and pH of the solution on the measured fluorescence polarization degree are discussed to understand the basic characteristics of the present method. The results showed that fluorescence polarization is considerably less sensitive to these quenching factors. A good correlation with the fluid temperature, on the other hand, was obtained, agreeing well with theoretical values and confirming the feasibility of the method.
Ender, Andreas; Mehl, Albert
2015-01-01
To investigate the accuracy of conventional and digital impression methods used to obtain full-arch impressions by using an in-vitro reference model. Eight different conventional (polyether, POE; vinylsiloxanether, VSE; direct scannable vinylsiloxanether, VSES; and irreversible hydrocolloid, ALG) and digital (CEREC Bluecam, CER; CEREC Omnicam, OC; Cadent iTero, ITE; and Lava COS, LAV) full-arch impressions were obtained from a reference model with a known morphology, using a highly accurate reference scanner. The impressions obtained were then compared with the original geometry of the reference model and within each test group. A point-to-point measurement of the surface of the model using the signed nearest neighbour method resulted in a mean (10%-90%)/2 percentile value for the difference between the impression and original model (trueness) as well as the difference between impressions within a test group (precision). Trueness values ranged from 11.5 μm (VSE) to 60.2 μm (POE), and precision ranged from 12.3 μm (VSE) to 66.7 μm (POE). Among the test groups, VSE, VSES, and CER showed the highest trueness and precision. The deviation pattern varied with the impression method. Conventional impressions showed high accuracy across the full dental arch in all groups, except POE and ALG. Conventional and digital impression methods show differences regarding full-arch accuracy. Digital impression systems reveal higher local deviations of the full-arch model. Digital intraoral impression systems do not show superior accuracy compared to highly accurate conventional impression techniques. However, they provide excellent clinical results within their indications applying the correct scanning technique.
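A sketch of the nearest-neighbour deviation statistic used for trueness and precision. Unsigned point-to-point distances on random point clouds stand in here for the signed point-to-surface distances of the actual scan comparison, and all noise magnitudes are assumed:

```python
import numpy as np

def deviation_stat(test_pts, ref_pts):
    """Nearest-neighbour distance from each test point to the reference
    cloud, summarised as the (P90 - P10)/2 value reported in the paper.
    Simplification: unsigned distances, no surface normals."""
    d = np.sqrt(((test_pts[:, None, :] - ref_pts[None, :, :]) ** 2).sum(-1))
    nn = d.min(axis=1)
    p10, p90 = np.percentile(nn, [10, 90])
    return (p90 - p10) / 2

rng = np.random.default_rng(0)
ref = rng.uniform(0, 10, size=(500, 3))            # reference points, mm
test = ref + rng.normal(0, 0.02, size=ref.shape)   # ~20 um scan noise (assumed)
print(f"deviation ~ {deviation_stat(test, ref) * 1000:.1f} um")
```

Trueness compares each impression against the reference model with this statistic; precision applies the same statistic pairwise between impressions within a test group.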
Oliveira, Laís Rani Sales; Braga, Stella Sueli Lourenço; Bicalho, Aline Arêdes; Ribeiro, Maria Tereza Hordones; Price, Richard Bengt; Soares, Carlos José
2018-07-01
To describe a method of measuring the molar cusp deformation using micro-computed tomography (micro-CT), the propagation of enamel cracks using transillumination, and the effects of hygroscopic expansion after incremental and bulk-filling resin composite restorations. Twenty human molars received standardized Class II mesio-occlusal-distal cavity preparations. They were restored with either a bulk-fill resin composite, X-tra fil (XTRA), or a conventional resin composite, Filtek Z100 (Z100). The resin composites were tested for post-gel shrinkage using a strain gauge method. Cusp deformation (CD) was evaluated using the images obtained using a micro-CT protocol and using a strain-gauge method. Enamel cracks were detected using transillumination. The post-gel shrinkage of Z100 was higher than XTRA (P < 0.001). The amount of cusp deformation produced using Z100 was higher compared to XTRA, irrespective of the measurement method used (P < 0.001). The thinner lingual cusp always had a higher CD than the buccal cusp, irrespective of the measurement method (P < 0.001). A positive correlation (r = 0.78) was found between cusp deformation measured by micro-CT or by the strain-gauge method. After hygroscopic expansion of the resin composite, the cusp displacement recovered around 85% (P < 0.001). After restoration, Z100 produced more cracks than XTRA (P = 0.012). Micro-CT was an effective method for evaluating the cusp deformation. Transillumination was effective for detecting enamel cracks. There were fewer negative effects of polymerization shrinkage in bulk-fill resin restorations using XTRA than for the conventional incremental filling technique using conventional composite resin Z100. Shrinkage and cusp deformation are directly related to the formation of enamel cracks. Cusp deformation and crack propagation may increase the risk of tooth fracture. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Saito, Terubumi; Tatsuta, Muneaki; Abe, Yamato; Takesawa, Minato
2018-02-01
We have succeeded in the direct measurement of solar cell/module internal conversion efficiency based on a calorimetric method, or electrical substitution method, by which the absorbed radiant power is determined by replacing the heat absorbed in the cell/module with electrical power. The technique is advantageous in that the reflectance and transmittance measurements required in conventional methods are not necessary. Also, the internal quantum efficiency can be derived from the conversion efficiencies by using the average photon energy. Agreement of the measured data with values estimated from the nominal values supports the validity of this technique.
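The electrical-substitution idea reduces to a simple energy balance: absorbed radiant power splits into electrical output plus heat, and the heat term is measured by matching it with a known electrical power. A minimal sketch under that assumption (function name and power values are illustrative, not from the paper):

```python
def internal_efficiency(p_out_W, p_substituted_W):
    """Electrical-substitution sketch: the heat dissipated in the cell is
    matched by a measured electrical power p_substituted_W, so absorbed
    radiant power is p_out + p_sub and the internal efficiency is
    p_out / (p_out + p_sub). No reflectance/transmittance data needed."""
    return p_out_W / (p_out_W + p_substituted_W)

# Illustrative: 0.25 W electrical output, 0.75 W of substituted heat.
eta = internal_efficiency(0.25, 0.75)
print(f"internal efficiency = {eta:.0%}")
```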
Yukimasa, Nobuyasu; Miura, Keisuke; Miyagawa, Yukiko; Fukuchi, Kunihiko
2015-01-01
Automated nontreponemal and treponemal test reagents based on the latex agglutination method (immunoticles auto3 RPR: ITA3RPR and immunoticles auto3 TP: ITA3TP) have been developed to address the shortcomings of conventional manual methods, such as their subjectivity and the large number of assays required. We evaluated these reagents with regard to their performance, reactivity to antibody isotype, and clinical significance. ITA3RPR and ITA3TP were measured using a clinical chemistry analyzer. Reactivity to antibody isotype was examined by gel filtration analysis. ITA3RPR and ITA3TP showed reactivity to both IgM- and IgG-class antibodies and detected early infections. ITA3RPR was verified to show a higher reactivity to IgM-class antibodies than the conventional methods. ITA3RPR correlated with VDRL in the high titer range, and measurement values decreased with treatment. ITA3RPR showed a negative result earlier after treatment than conventional methods. ITA3TP showed high specificity and did not give any false-negative reactions. Significant differences in the measurement values of ITA3RPR between the active and past infection groups were verified. The double test of ITA3RPR and ITA3TP enables efficient and objective judgment for syphilis diagnosis and treatment, achieving clinical availability. Copyright © 2014 Japanese Society of Chemotherapy and The Japanese Association for Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kadem, L.; Knapp, Y.; Pibarot, P.; Bertrand, E.; Garcia, D.; Durand, L. G.; Rieu, R.
2005-12-01
The effective orifice area (EOA) is the most commonly used parameter to assess the severity of aortic valve stenosis as well as the performance of valve substitutes. Particle image velocimetry (PIV) may be used for in vitro estimation of valve EOA. In the present study, we propose a new and simple method based on Howe’s developments of Lighthill’s aero-acoustic theory. This method is based on an acoustical source term (AST) to estimate the EOA from the transvalvular flow velocity measurements obtained by PIV. The EOAs measured by the AST method downstream of three sharp-edged orifices were in excellent agreement with the EOAs predicted from the potential flow theory used as the reference method in this study. Moreover, the AST method was more accurate than other conventional PIV methods based on streamlines, inflexion point or vorticity to predict the theoretical EOAs. The superiority of the AST method is likely due to the nonlinear form of the AST. There was also an excellent agreement between the EOAs measured by the AST method downstream of the three sharp-edged orifices as well as downstream of a bioprosthetic valve with those obtained by the conventional clinical method based on Doppler-echocardiographic measurements of transvalvular velocity. The results of this study suggest that this new simple PIV method provides an accurate estimation of the aortic valve flow EOA. This new method may thus be used as a reference method to estimate the EOA in experimental investigation of the performance of valve substitutes and to validate Doppler-echocardiographic measurements under various physiologic and pathologic flow conditions.
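The clinical Doppler reference mentioned at the end is the continuity equation, EOA = A_LVOT x VTI_LVOT / VTI_AV. A minimal sketch with illustrative numbers (not from the paper):

```python
import math

def eoa_continuity(lvot_diameter_cm, vti_lvot_cm, vti_av_cm):
    """Clinical continuity-equation EOA used as the Doppler reference:
    flow through the LVOT equals flow through the valve, so
    EOA = A_LVOT * VTI_LVOT / VTI_AV."""
    a_lvot = math.pi * (lvot_diameter_cm / 2) ** 2
    return a_lvot * vti_lvot_cm / vti_av_cm

# Illustrative moderate-stenosis numbers (assumed): 2.0 cm LVOT diameter,
# VTI_LVOT = 20 cm, VTI_AV = 60 cm.
eoa = eoa_continuity(2.0, 20.0, 60.0)
print(f"EOA = {eoa:.2f} cm^2")
```

The AST method of the paper aims to reproduce exactly this quantity from PIV velocity fields, which is why agreement with Doppler-derived EOAs is the validation criterion.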
Improving Arterial Spin Labeling by Using Deep Learning.
Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong
2018-05-01
Purpose To develop a deep learning algorithm that generates arterial spin labeling (ASL) perfusion images with higher accuracy and robustness by using a smaller number of subtraction images. Materials and Methods For ASL image generation from pair-wise subtraction, we used a convolutional neural network (CNN) as a deep learning algorithm. The ground truth perfusion images were generated by averaging six or seven pairwise subtraction images acquired with (a) conventional pseudocontinuous arterial spin labeling from seven healthy subjects or (b) Hadamard-encoded pseudocontinuous ASL from 114 patients with various diseases. CNNs were trained to generate perfusion images from a smaller number (two or three) of subtraction images and evaluated by means of cross-validation. CNNs from the patient data sets were also tested on 26 separate stroke data sets. CNNs were compared with the conventional averaging method in terms of mean square error and radiologic score by using a paired t test and/or Wilcoxon signed-rank test. Results Mean square errors were approximately 40% lower than those of the conventional averaging method for the cross-validation with the healthy subjects and patients and the separate test with the patients who had experienced a stroke (P < .001). Region-of-interest analysis in stroke regions showed that cerebral blood flow maps from CNN (mean ± standard deviation, 19.7 mL per 100 g/min ± 9.7) had smaller mean square errors than those determined with the conventional averaging method (43.2 ± 29.8) (P < .001). Radiologic scoring demonstrated that CNNs suppressed noise and motion and/or segmentation artifacts better than the conventional averaging method did (P < .001). Conclusion CNNs provided superior perfusion image quality and more accurate perfusion measurement compared with those of the conventional averaging method for generation of ASL images from pair-wise subtraction images. © RSNA, 2017.
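The conventional averaging baseline the CNN is compared against can be sketched with toy data; all signal levels, noise magnitudes, and image sizes below are assumed for illustration:

```python
import numpy as np

def asl_average(controls, labels):
    """Conventional ASL baseline: average the pair-wise
    control - label subtraction images."""
    return (controls - labels).mean(axis=0)

rng = np.random.default_rng(1)
truth = rng.uniform(0.5, 1.5, size=(8, 8))   # toy perfusion map (assumed)
base = 100.0                                 # static tissue signal (assumed)
pairs = 6

noise = lambda: rng.normal(0, 0.5, size=(pairs, 8, 8))
controls = base + truth + noise()            # control images carry perfusion
labels = base + noise()                      # label images do not

est = asl_average(controls, labels)
mse = float(((est - truth) ** 2).mean())
print(f"MSE with {pairs} pairs: {mse:.3f}")
```

The paper's point is that a CNN trained on such pairs reaches roughly 40% lower mean square error from only two or three subtractions than this simple average achieves, shortening the acquisition.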
Kim, Sung Jae; Kim, Sung Hwan; Kim, Young Hwan; Chun, Yong Min
2015-01-01
The authors have observed a failure to achieve secure fixation in elderly patients when inserting a half-pin at the anteromedial surface of the tibia. The purpose of this study was to compare two methods for inserting a half-pin at tibia diaphysis in elderly patients. Twenty cadaveric tibias were divided into Group C or V. A half-pin was inserted into the tibias of Group C via the conventional method, from the anteromedial surface to the interosseous border of the tibia diaphysis, and into the tibias of Group V via the vertical method, from the anterior border to the posterior surface at the same level. The maximum insertion torque was measured during the bicortical insertion with a torque driver. The thickness of the cortex was measured by micro-computed tomography. The relationship between the thickness of the cortex engaged and the insertion torque was investigated. The maximum insertion torque and the thickness of the cortex were significantly higher in Group V than Group C. Both groups exhibited a statistically significant linear correlation between torque and thickness by Spearman's rank correlation analysis. Half-pins inserted by the vertical method achieved purchase of more cortex than those inserted by the conventional method. Considering that cortical thickness and insertion torque in Group V were significantly greater than those in Group C, we suggest that the vertical method of half-pin insertion may be an alternative to the conventional method in elderly patients.
Ballistics-Electron-Microscopy and Spectroscopy of Metal/GaN Interfaces
NASA Technical Reports Server (NTRS)
Bell, L. D.; Smith, R. P.; McDermott, B. T.; Gertner, E. R.; Pittman, R.; Pierson, R. L.; Sullivan, G. J.
1997-01-01
BEEM spectroscopy and imaging have been applied to the Au/GaN interface. In contrast to previous BEEM measurements, spectra yield a Schottky barrier height of 1.04 eV that agrees well with the highest values measured by conventional methods.
A pump monitoring approach to irrigation pumping plant testing
USDA-ARS's Scientific Manuscript database
The conventional approach for evaluating irrigation pumping plant performance has been an instantaneous spot measurement approach. Using this method, the tester measures the necessary work and energy use parameters to determine overall pumping plant performance. The primary limitation of this appr...
NASA Astrophysics Data System (ADS)
Ju, Yang; Inoue, Kojiro; Saka, Masumi; Abe, Hiroyuki
2002-11-01
We present a method for quantitative measurement of electrical conductivity of semiconductor wafers in a contactless fashion by using millimeter waves. A focusing sensor was developed to focus a 110 GHz millimeter wave beam on the surface of a silicon wafer. The amplitude and the phase of the reflection coefficient of the millimeter wave signal were measured by which electrical conductivity of the wafer was determined quantitatively, independent of the permittivity and thickness of the wafers. The conductivity obtained by this method agrees well with that measured by the conventional four-point-probe method.
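For comparison, the four-point-probe reference value can be computed with the classic semi-infinite-sample formula, rho = 2*pi*s*V/I. Probe spacing and readings below are illustrative, and thin-wafer geometric correction factors are deliberately omitted:

```python
import math

def four_point_conductivity(V, I, s_m):
    """Semi-infinite-sample four-point-probe formula:
    rho = 2*pi*s*V/I, sigma = 1/rho.
    Thin-wafer correction factors are omitted in this sketch."""
    rho = 2 * math.pi * s_m * V / I          # resistivity, ohm*m
    return 1.0 / rho                         # conductivity, S/m

# Illustrative readings (assumed): 1 mV across the inner probes at 1 mA,
# with 1 mm probe spacing.
sigma = four_point_conductivity(V=1e-3, I=1e-3, s_m=1e-3)
print(f"sigma = {sigma:.1f} S/m")
```

The contactless millimeter-wave method of the paper targets the same quantity without the probe contact and its associated surface damage.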
NASA Technical Reports Server (NTRS)
Frohberg, M. G.; Betz, G.
1982-01-01
A method was tested for measuring the enthalpies of mixing of liquid metallic alloying systems, involving the combination of two samples in the electromagnetic field of an induction coil. The heat of solution is calculated from the pyrometrically measured temperature effect, the heat capacity of the alloy, and the heat content of the added sample. The usefulness of the method was tested experimentally with iron-copper and niobium-silicon systems. This method should be especially applicable to high-melting alloys, for which conventional measurements have failed.
Microrheology with optical tweezers: measuring the relative viscosity of solutions 'at a glance'.
Tassieri, Manlio; Del Giudice, Francesco; Robertson, Emma J; Jain, Neena; Fries, Bettina; Wilson, Rab; Glidle, Andrew; Greco, Francesco; Netti, Paolo Antonio; Maffettone, Pier Luca; Bicanic, Tihana; Cooper, Jonathan M
2015-03-06
We present a straightforward method for measuring the relative viscosity of fluids via a simple graphical analysis of the normalised position autocorrelation function of an optically trapped bead, without the need of embarking on laborious calculations. The advantages of the proposed microrheology method are evident when it is adopted for measurements of materials whose availability is limited, such as those involved in biological studies. The method has been validated by direct comparison with conventional bulk rheology methods, and has been applied both to characterise synthetic linear polyelectrolytes solutions and to study biomedical samples.
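A sketch of the 'at a glance' graphical analysis: for an optically trapped bead the normalised position autocorrelation decays with time constant tau = 6*pi*eta*a/kappa, so the relative viscosity is simply a ratio of decay times between sample and reference. The exponential decay constants below are assumed, not measured values:

```python
import numpy as np

def decay_time(t, npaf):
    """Lag time at which the normalised position autocorrelation falls
    to 1/e. For a trapped bead tau = 6*pi*eta*a/kappa, so with the same
    bead and trap, eta_rel = tau_sample / tau_reference."""
    # np.interp needs ascending x, so reverse the decaying curve
    return float(np.interp(1 / np.e, npaf[::-1], t[::-1]))

t = np.linspace(0, 0.1, 1001)
ref = np.exp(-t / 0.005)       # water-like trap relaxation (assumed)
sample = np.exp(-t / 0.015)    # three-fold more viscous solution (assumed)

eta_rel = decay_time(t, sample) / decay_time(t, ref)
print(f"relative viscosity ~ {eta_rel:.2f}")
```

Because only the ratio of decay times enters, the trap stiffness kappa and bead radius a cancel, which is what removes the laborious calibration the abstract alludes to.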
Obtaining high-resolution velocity spectra using weighted semblance
NASA Astrophysics Data System (ADS)
Ebrahimi, Saleh; Kahoo, Amin Roshandel; Porsani, Milton J.; Kalateh, Ali Nejati
2017-02-01
Velocity analysis employs a coherency measurement along a hyperbolic or non-hyperbolic trajectory time window to build velocity spectra. Accuracy and resolution are strictly related to the method of coherency measurement. Semblance, the most common coherence measure, has poor velocity resolution, which affects one's ability to distinguish and pick distinct peaks. Increasing the resolution of the semblance velocity spectra improves the accuracy of the velocity estimated for normal moveout correction and stacking. The low resolution of semblance spectra is due to its low sensitivity to velocity changes. In this paper, we present a new weighted semblance method that ensures high-resolution velocity spectra. To increase the resolution of the semblance spectra, we introduce into the semblance equation two weighting functions, based on the ratio of the first to second singular values of the time window and on the position of the seismic wavelet in the time window. We test the method on both synthetic and real field data to compare the resolution of the weighted and conventional semblance methods. Numerical examples with synthetic and real seismic data indicate that the proposed weighted semblance method provides higher resolution than conventional semblance and can separate reflectors which are mixed in the semblance spectrum.
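Conventional semblance of an (n_samples, n_traces) window is S = sum_t(sum_i a_it)^2 / (N sum_t sum_i a_it^2). One possible singular-value weighting in the spirit of the paper is sketched below; the exact weighting functions in the paper differ, so treat the weight w as an illustrative assumption:

```python
import numpy as np

def semblance(window):
    """Conventional semblance of an (n_samples, n_traces) window:
    S = sum_t (sum_i a_it)^2 / (N * sum_t sum_i a_it^2)."""
    num = (window.sum(axis=1) ** 2).sum()
    den = window.shape[1] * (window ** 2).sum()
    return num / den

def weighted_semblance(window):
    """Sketch of the singular-value idea: coherent energy concentrates in
    the first singular value, so damp semblance by 1 - s2/s1. The paper's
    actual weighting functions differ from this illustrative choice."""
    s = np.linalg.svd(window, compute_uv=False)
    w = 1.0 - s[1] / s[0] if s[0] > 0 else 0.0
    return w * semblance(window)

t = np.linspace(0, 1, 50)
coherent = np.outer(np.sin(2 * np.pi * 5 * t), np.ones(12))  # flat event
noisy = coherent + np.random.default_rng(2).normal(0, 0.5, coherent.shape)
print(semblance(coherent), weighted_semblance(noisy))
```

Identical traces give S = 1 exactly; the weight only suppresses windows whose energy is spread across several singular values, i.e. incoherent ones, which sharpens the spectrum's peaks.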
Passive wireless strain monitoring of tire using capacitance change
NASA Astrophysics Data System (ADS)
Matsuzaki, Ryosuke; Todoroki, Akira
2004-07-01
In-service strain monitoring of automobile tires is quite effective for improving the reliability of tires and anti-lock braking systems (ABS). Since conventional strain gages have high stiffness and require lead wires, they are cumbersome for strain measurements of tires. In a previous study, the authors proposed a new wireless strain monitoring method that adopts the tire itself as a sensor, together with an oscillating circuit. This method is very simple and useful, but it requires a battery to activate the oscillating circuit. In the present study, the previous method for wireless tire monitoring is improved to produce a passive wireless sensor. A specimen made from a commercially available tire is connected to a tuning circuit comprising an inductance and a capacitance as a condenser. A capacitance change of the tire shifts the tuning frequency. This change of the tuned radio wave enables us to measure the applied strain of the specimen wirelessly, without any external power supply. The new passive wireless method is applied to a specimen and the static applied strain is measured. As a result, the method is experimentally shown to be effective for passive wireless strain monitoring of tires.
Label-Free, Flow-Imaging Methods for Determination of Cell Concentration and Viability.
Sediq, A S; Klem, R; Nejadnik, M R; Meij, P; Jiskoot, Wim
2018-05-30
To investigate the potential of two flow imaging microscopy (FIM) techniques (Micro-Flow Imaging (MFI) and FlowCAM) for determining total cell concentration and cell viability. B-lineage acute lymphoblastic leukemia (B-ALL) cells of 2 different donors were exposed to ambient conditions. Samples were taken on different days and measured with MFI, FlowCAM, hemocytometry and automated cell counting. Dead and live cells from a fresh B-ALL cell suspension were fractionated by flow cytometry in order to derive software filters based on morphological parameters of the separate cell populations with MFI and FlowCAM. The filter sets were used to assess cell viability in the measured samples. All techniques gave fairly similar cell concentration values over the whole incubation period. MFI proved superior with respect to precision, whereas FlowCAM provided particle images with a higher resolution. Moreover, both FIM methods provided results for cell viability similar to those of the conventional methods (hemocytometry and automated cell counting). FIM-based methods may be advantageous over conventional methods for determining total cell concentration and cell viability, as FIM measures much larger sample volumes, does not require labeling, is less laborious, and provides images of individual cells.
Dose and image quality for a cone-beam C-arm CT system.
Fahrig, Rebecca; Dixon, Robert; Payne, Thomas; Morin, Richard L; Ganguly, Arundhuti; Strobel, Norbert
2006-12-01
We assess the dose and image quality of a state-of-the-art angiographic C-arm system (Axiom Artis dTA, Siemens Medical Solutions, Forchheim, Germany) for three-dimensional neuro-imaging at various dose levels and tube voltages, and describe an associated measurement method. Unlike in conventional CT, the beam length covers the entire phantom; hence, the computed tomography dose index (CTDI) is not the metric of choice, and one can revert to conventional dosimetry methods by directly measuring the dose at various points using a small ion chamber. This method allows us to define and compute a new dose metric that is appropriate for direct comparison with the familiar CTDIw of conventional CT. A perception study involving the CATPHAN 600 indicates that one can expect to see at least the 9 mm inset with 0.5% nominal contrast at the recommended head-scan dose (60 mGy) when using tube voltages ranging from 70 kVp to 125 kVp. When analyzing the impact of tube voltage on image quality at a fixed dose, we found that lower tube voltages gave improved low-contrast detectability for small-diameter objects. The relationships between kVp, image noise, dose, and contrast perception are discussed.
Management system to a photovoltaic panel based on the measurement of short-circuit currents
NASA Astrophysics Data System (ADS)
Dordescu, M.
2016-12-01
This article is devoted to fundamental issues arising from the operation of a photovoltaic (PV) panel with a view to increased energy efficiency. By measuring the short-circuit current, the method determines the prescribed current value corresponding to the maximum power point; the results obtained by loading the panel according to this method are the maximum energy possible, justifying the usefulness of this process, which is very simple and inexpensive to implement in practice. The proposed adjustment method is much simpler and more economical than conventional methods that rely on measuring the output power.
Densitometry By Acoustic Levitation
NASA Technical Reports Server (NTRS)
Trinh, Eugene H.
1989-01-01
"Static" and "dynamic" methods developed for measuring mass density of acoustically levitated solid particle or liquid drop. In "static" method, unknown density of sample found by comparison with another sample of known density. "Dynamic" method practiced with or without gravitational field. Advantages over conventional density-measuring techniques: sample does not have to make contact with container or other solid surface, size and shape of samples do not affect measurement significantly, sound field does not have to be known in detail, and sample can be smaller than microliter.
A New Approach to Detect Mover Position in Linear Motors Using Magnetic Sensors
Paul, Sarbajit; Chang, Junghwan
2015-01-01
A new method to detect the mover position of a linear motor is proposed in this paper. This method employs a simple, inexpensive Hall effect sensor-based magnetic sensor unit to detect the mover position of the linear motor. As the linear motor moves, Hall effect sensor modules separated by 120° electrical, following the idea of the three-phase balanced condition (va + vb + vc = 0), are used to produce three-phase signals. The amplitudes of the sensor output voltage signals are normalised to unity to minimize amplitude errors. A three-phase to two-phase transformation of the unit-amplitude signals is then performed to reduce the triplen (third-multiple) harmonic components. The final output thus obtained is converted to position data by use of the arctangent function. The measurement accuracy of the new method is analyzed by experiments and compared with the conventional two-phase method. Using the same number of sensor modules as the conventional two-phase method, the proposed method gives more accurate position information than the conventional system, where sensors are separated by 90° electrical. PMID:26506348
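The signal chain described above (unit-amplitude three-phase signals, a three-to-two-phase transformation, then an arctangent conversion) can be sketched as follows; this is an illustrative reconstruction, not the authors' code, and the `pole_pitch` parameter is hypothetical:

```python
import math

def position_from_three_phase(va, vb, vc, pole_pitch=1.0):
    """Estimate mover position within one electrical period (= pole_pitch)
    from three unit-amplitude signals separated by 120 deg electrical."""
    # Clarke (three-to-two-phase) transform; components common to all
    # three signals, such as triplen harmonics, cancel out.
    alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
    beta = (1.0 / math.sqrt(3.0)) * (vb - vc)
    # Arctangent conversion to electrical angle, then to position.
    theta = math.atan2(beta, alpha) % (2.0 * math.pi)
    return theta / (2.0 * math.pi) * pole_pitch
```

For ideal balanced cosines the transform returns exactly cos θ and sin θ, so `atan2` recovers the electrical angle directly.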
New methods, algorithms, and software for rapid mapping of tree positions in coordinate forest plots
A. Dan Wilson
2000-01-01
The theories and methodologies for two new tree mapping methods, the Sequential-target method and the Plot-origin radial method, are described. The methods accommodate the use of any conventional distance measuring device and compass to collect horizontal distance and azimuth data between source or reference positions (origins) and target trees. Conversion equations...
NASA Technical Reports Server (NTRS)
Schweikhard, W. G.; Dennon, S. R.
1986-01-01
A review of the Melick method of inlet flow dynamic distortion prediction by statistical means is provided. These developments include the general Melick approach with full dynamic measurements, a limited dynamic measurement approach, and a turbulence modelling approach which requires no dynamic rms pressure fluctuation measurements. These modifications are evaluated by comparing predicted and measured peak instantaneous distortion levels from provisional inlet data sets. A nonlinear mean-line following vortex model is proposed and evaluated as a potential criterion for improving the peak instantaneous distortion map generated from the conventional linear vortex of the Melick method. The model is simplified to a series of linear vortex segments which lay along the mean line. Maps generated with this new approach are compared with conventionally generated maps, as well as measured peak instantaneous maps. Inlet data sets include subsonic, transonic, and supersonic inlets under various flight conditions.
Parallel imaging of knee cartilage at 3 Tesla.
Zuo, Jin; Li, Xiaojuan; Banerjee, Suchandrima; Han, Eric; Majumdar, Sharmila
2007-10-01
To evaluate the feasibility and reproducibility of quantitative cartilage imaging with parallel imaging at 3T and to determine the impact of the acceleration factor (AF) on morphological and relaxation measurements. An eight-channel phased-array knee coil was employed for conventional and parallel imaging on a 3T scanner. The imaging protocol consisted of a T2-weighted fast spin echo (FSE), a 3D-spoiled gradient echo (SPGR), a custom 3D-SPGR T1rho, and a 3D-SPGR T2 sequence. Parallel imaging was performed with an array spatial sensitivity technique (ASSET). The left knees of six healthy volunteers were scanned with both conventional and parallel imaging (AF = 2). Morphological parameters and relaxation maps from parallel imaging methods (AF = 2) showed comparable results with conventional method. The intraclass correlation coefficient (ICC) of the two methods for cartilage volume, mean cartilage thickness, T1rho, and T2 were 0.999, 0.977, 0.964, and 0.969, respectively, while demonstrating excellent reproducibility. No significant measurement differences were found when AF reached 3 despite the low signal-to-noise ratio (SNR). The study demonstrated that parallel imaging can be applied to current knee cartilage quantification at AF = 2 without degrading measurement accuracy with good reproducibility while effectively reducing scan time. Shorter imaging times can be achieved with higher AF at the cost of SNR. (c) 2007 Wiley-Liss, Inc.
Surface photovoltage method extended to silicon solar cell junction
NASA Technical Reports Server (NTRS)
Wang, E. Y.; Baraona, C. R.; Brandhorst, H. W., Jr.
1974-01-01
The conventional surface photovoltage (SPV) method is extended to the measurement of the minority carrier diffusion length in diffused semiconductor junctions of the type used in a silicon solar cell. The minority carrier diffusion values obtained by the SPV method agree well with those obtained by the X-ray method. Agreement within experimental error is also obtained between the minority carrier diffusion lengths in solar cell diffusion junctions and in the same materials with n-regions removed by etching, when the SPV method was used in the measurements.
Three-Signal Method for Accurate Measurements of Depolarization Ratio with Lidar
NASA Technical Reports Server (NTRS)
Reichardt, Jens; Baumgart, Rudolf; McGee, Thomas J.
2003-01-01
A method is presented that permits the determination of atmospheric depolarization-ratio profiles from three elastic-backscatter lidar signals with different sensitivity to the state of polarization of the backscattered light. The three-signal method is insensitive to experimental errors and does not require calibration of the measurement, which could cause large systematic uncertainties of the results, as is the case in the lidar technique conventionally used for the observation of depolarization ratios.
Method and apparatus for measuring nuclear magnetic properties
Weitekamp, D.P.; Bielecki, A.; Zax, D.B.; Zilm, K.W.; Pines, A.
1987-12-01
A method for studying the chemical and structural characteristics of materials is disclosed. The method includes placement of a sample material in a high strength polarizing magnetic field to order the sample nuclei. The condition used to order the sample is then removed abruptly and the ordering of the sample allowed to evolve for a time interval. At the end of the time interval, the ordering of the sample is measured by conventional nuclear magnetic resonance techniques. 5 figs.
Method and apparatus for measuring nuclear magnetic properties
Weitekamp, Daniel P.; Bielecki, Anthony; Zax, David B.; Zilm, Kurt W.; Pines, Alexander
1987-01-01
A method for studying the chemical and structural characteristics of materials is disclosed. The method includes placement of a sample material in a high strength polarizing magnetic field to order the sample nuclei. The condition used to order the sample is then removed abruptly and the ordering of the sample allowed to evolve for a time interval. At the end of the time interval, the ordering of the sample is measured by conventional nuclear magnetic resonance techniques.
Seok, Junhee; Seon Kang, Yeong
2015-01-01
Mutual information, a general measure of the relatedness between two random variables, has been actively used in the analysis of biomedical data. The mutual information between two discrete variables is conventionally calculated from their joint probabilities, estimated from the frequency of observed samples in each combination of variable categories. However, this conventional approach is no longer efficient for discrete variables with many categories, which are easily found in large-scale biomedical data such as diagnosis codes, drug compounds, and genotypes. Here, we propose a method that provides stable estimations of the mutual information between discrete variables with many categories. Simulation studies showed that the proposed method reduced the estimation errors 45-fold and improved the correlation coefficients with the true values 99-fold, compared with the conventional calculation of mutual information. The proposed method was also demonstrated through a case study of diagnostic data in electronic health records. This method is expected to be useful in the analysis of various biomedical data with discrete variables. PMID:26046461
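The conventional plug-in calculation that the proposed method improves on can be sketched as follows; the abstract does not give the stabilised estimator itself, so only the baseline frequency-based computation is shown:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Conventional plug-in estimate of mutual information (in nats)
    between two paired sequences of discrete labels, using joint
    probabilities estimated from observed sample frequencies."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))          # joint frequency of each category pair
    px, py = Counter(xs), Counter(ys)   # marginal frequencies
    # MI = sum over cells of p(x,y) * log( p(x,y) / (p(x) * p(y)) )
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())
```

For perfectly dependent labels the estimate equals the entropy of the variable; the abstract's point is that this estimator degrades when the number of categories is large relative to the sample size.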
Pump-Probe Spectroscopy Using the Hadamard Transform.
Beddard, Godfrey S; Yorke, Briony A
2016-08-01
A new method of performing pump-probe experiments is proposed and experimentally demonstrated by a proof of concept on the millisecond scale. The idea behind this method is to measure the total probe intensity arising from several time points as a group, instead of measuring each time point separately. These multiplexed measurements are then transformed into the true signal via multiplication with a binary Hadamard S matrix. Each group of probe pulses is determined by the pattern of a row of the Hadamard S matrix, and the experiment is completed by rotating this pattern by one step for each sample excitation until the original pattern is again produced. Thus, to measure n time points, n excitation events are needed, along with n probe patterns, each taken from the n × n S matrix. The time resolution is determined by the shortest time between the probe pulses. In principle, this method could be used on all timescales, instead of the conventional pump-probe method, which uses delay lines for picosecond and faster time resolution, or fast detectors and oscilloscopes on longer timescales. The new method is particularly suitable for situations where the probe intensity is weak and/or the detector is noisy. When the detector is noisy, there is in principle a signal-to-noise advantage over conventional pump-probe methods. © The Author(s) 2016.
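The grouping-and-recovery scheme described above can be illustrated with the smallest S matrix (order 3, obtained from the 4 × 4 Hadamard matrix); the signal values below are made up for illustration:

```python
# Order-3 Hadamard S matrix: each row is a binary pattern saying which
# probe time points are grouped into a single measurement.
S = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 0]]

def multiplex(signal):
    """Simulate the grouped measurements y = S @ x."""
    return [sum(s * x for s, x in zip(row, signal)) for row in S]

def demultiplex(y):
    """Recover the true signal using the closed-form inverse of an
    S matrix: S^-1 = (2 / (n + 1)) * (2 * S^T - J)."""
    n = len(S)
    return [2.0 / (n + 1) * sum((2 * S[j][i] - 1) * y[j] for j in range(n))
            for i in range(n)]
```

Note that the three rows are cyclic rotations of one another, matching the experiment's rotate-by-one-step acquisition; real measurements use much larger cyclic S matrices built the same way.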
Non-contact measurements of creep properties of niobium at 1985 °C
NASA Astrophysics Data System (ADS)
Lee, J.; Wall, J. J.; Rogers, J. R.; Rathz, T. J.; Choo, H.; Liaw, P. K.; Hyers, R. W.
2015-01-01
The stress exponent in the power-law creep of niobium at 1985 °C was measured by a non-contact technique using an electrostatic levitation facility at NASA MSFC. This method employs a distribution of stress to allow the stress exponent to be determined from each test, rather than from the curve fit through measurements from multiple samples that is required by conventional methods. The sample is deformed by the centripetal acceleration from the rapid rotation, and the deformed shapes are analyzed to determine the strain. Based on a mathematical proof, which revealed that the stress exponent was determined uniquely by the ratio of the polar to equatorial strains, a series of finite-element analyses with the models of different stress exponents were also performed to determine the stress exponent corresponding to the measured strain ratio. The stress exponent from the ESL experiment showed a good agreement with those from the literature and the conventional creep test.
Brew, Christopher J; Simpson, Philip M; Whitehouse, Sarah L; Donnelly, William; Crawford, Ross W; Hubble, Matthew J W
2012-04-01
We describe a scaling method for templating digital radiographs using conventional acetate templates independent of template magnification without the need for a calibration marker. The mean magnification factor for the radiology department was determined (119.8%; range, 117%-123.4%). This fixed magnification factor was used to scale the radiographs by the method described. Thirty-two femoral heads on postoperative total hip arthroplasty radiographs were then measured and compared with the actual size. The mean absolute accuracy was within 0.5% of actual head size (range, 0%-3%) with a mean absolute difference of 0.16 mm (range, 0-1 mm; SD, 0.26 mm). Intraclass correlation coefficient showed excellent reliability for both interobserver and intraobserver measurements with intraclass correlation coefficient scores of 0.993 (95% CI, 0.988-0.996) for interobserver measurements and intraobserver measurements ranging between 0.990 and 0.993 (95% CI, 0.980-0.997). Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.
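The arithmetic implied by a fixed departmental magnification factor is straightforward; a minimal sketch (the 119.8% figure is the study's reported mean, the measurement value in the test is hypothetical):

```python
DEPT_MAGNIFICATION = 1.198  # department mean radiographic magnification (119.8%)

def true_size(measured_on_radiograph_mm, magnification=DEPT_MAGNIFICATION):
    """Convert a distance measured on the scaled digital radiograph to
    the anatomical size, assuming the fixed departmental magnification."""
    return measured_on_radiograph_mm / magnification
```

The study's point is that, with the radiograph scaled by this fixed factor, conventional acetate templates can be applied without a per-patient calibration marker.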
Vision-based system identification technique for building structures using a motion capture system
NASA Astrophysics Data System (ADS)
Oh, Byung Kwan; Hwang, Jin Woo; Kim, Yousok; Cho, Tongjun; Park, Hyo Seon
2015-11-01
This paper presents a new vision-based system identification (SI) technique for building structures using a motion capture system (MCS). The MCS, with its outstanding capabilities for dynamic response measurement, can provide gage-free measurements of vibrations through the convenient installation of multiple markers. In this technique, the dynamic characteristics (natural frequency, mode shape, and damping ratio) of building structures are extracted from the dynamic displacement responses measured by the MCS, after converting the displacements from the MCS to accelerations and conducting SI by frequency domain decomposition (FDD). A free vibration experiment on a three-story shear frame was conducted to validate the proposed technique. The SI results from the conventional accelerometer-based method were compared with those from the proposed technique and showed good agreement, which confirms the validity and applicability of the proposed vision-based SI technique for building structures. Furthermore, SI applying the MCS-measured displacements directly to FDD was performed and showed results identical to those of the conventional SI method.
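The displacement-to-acceleration conversion step mentioned above can be done by numerical double differentiation; a minimal central-difference sketch (the paper does not specify which conversion scheme was actually used):

```python
def displacement_to_acceleration(u, dt):
    """Central-difference estimate of acceleration from an evenly
    sampled displacement record u of one marker; returns a list
    two samples shorter than the input."""
    return [(u[i + 1] - 2.0 * u[i] + u[i - 1]) / dt ** 2
            for i in range(1, len(u) - 1)]
```

For a displacement record that is exactly quadratic in time (constant acceleration), the central difference is exact, which makes it a convenient sanity check.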
Evaluation of volatile organic emissions from hazardous waste incinerators.
Sedman, R M; Esparza, J R
1991-01-01
Conventional methods of risk assessment typically employed to evaluate the impact of hazardous waste incinerators on public health must rely on somewhat speculative emissions estimates or on complicated and expensive sampling and analytical methods. The limited amount of toxicological information concerning many of the compounds detected in stack emissions also complicates the evaluation of the public health impacts of these facilities. An alternative approach aimed at evaluating the public health impacts associated with volatile organic stack emissions is presented that relies on a screening criterion to evaluate total stack hydrocarbon emissions. If the concentration of hydrocarbons in ambient air is below the screening criterion, volatile emissions from the incinerator are judged not to pose a significant threat to public health. Both the screening criterion and a conventional method of risk assessment were employed to evaluate the emissions from 20 incinerators. Use of the screening criterion always yielded a substantially greater estimate of risk than that derived by the conventional method. Since the use of the screening criterion always yielded estimates of risk that were greater than those determined by conventional methods, and measuring total hydrocarbon emissions is a relatively simple analytical procedure, the use of the screening criterion would appear to facilitate the evaluation of operating hazardous waste incinerators. PMID:1954928
Cınar, Yasin; Cingü, Abdullah Kürşat; Türkcü, Fatih Mehmet; Çınar, Tuba; Yüksel, Harun; Özkurt, Zeynep Gürsel; Çaça, Ihsan
2014-09-01
To compare the outcomes of accelerated and conventional corneal cross-linking (CXL) for progressive keratoconus (KC). Patients were divided into two groups: the accelerated CXL group and the conventional CXL group. The uncorrected distant visual acuity (UDVA), corrected distant visual acuity (CDVA), refraction and keratometric values were measured preoperatively and postoperatively. The data of the two groups were compared statistically. The mean UDVA and CDVA at six months postoperatively were better than the preoperative values in both groups. While the change in UDVA and CDVA was statistically significant in the accelerated CXL group (p = 0.035 and p = 0.047, respectively), it did not reach statistical significance in the conventional CXL group (p = 0.184 and p = 0.113, respectively). The decreases in the mean corneal power (Km) and maximum keratometric value (Kmax) were statistically significant in both groups (p = 0.012 and 0.046, respectively, in the accelerated CXL group; p = 0.012 and 0.041, respectively, in the conventional CXL group). There was no statistically significant difference in visual and refractive results between the two groups (p > 0.05). The short-term refractive and visual results of the accelerated CXL method and the conventional CXL method for the treatment of KC were similar. The accelerated CXL method is faster and provides higher patient throughput.
Correlation and agreement of a digital and conventional method to measure arch parameters.
Nawi, Nes; Mohamed, Alizae Marny; Marizan Nor, Murshida; Ashar, Nor Atika
2018-01-01
The aim of the present study was to determine the overall reliability and validity of arch parameters measured digitally compared to conventional measurement. A sample of 111 plaster study models of Down syndrome (DS) patients were digitized using a blue-light three-dimensional (3D) scanner. Digital and manual measurements of the defined parameters were performed using Geomagic analysis software (Geomagic Studio 2014 software, 3D Systems, Rock Hill, SC, USA) on the digital models and with a digital calliper (Tuten, Germany) on the plaster study models. Both measurements were repeated twice to validate the intraexaminer reliability based on intraclass correlation coefficients (ICCs), using the independent t test and Pearson's correlation, respectively. The Bland-Altman method of analysis was used to evaluate the agreement of the measurements between the digital and plaster models. No statistically significant differences (p > 0.05) were found between the manual and digital methods when measuring the arch width, arch length, and space analysis. In addition, all parameters showed a significant correlation coefficient (r ≥ 0.972; p < 0.01) between all digital and manual measurements. Furthermore, positive agreement between digital and manual measurements of the arch width (90-96%) and of the arch length and space analysis (95-99%) was also demonstrated using the Bland-Altman method. These results demonstrate that 3D blue-light scanning and measurement software are able to precisely produce a 3D digital model and measure arch width, arch length, and space analysis. The 3D digital model is valid for use in various clinical applications.
NASA Astrophysics Data System (ADS)
Fang, Jingyu; Xu, Haisong; Wang, Zhehong; Wu, Xiaomin
2016-05-01
With colorimetric characterization, digital cameras can be used as image-based tristimulus colorimeters for color communication. In order to overcome the restriction to fixed capture settings adopted in conventional colorimetric characterization procedures, a novel method that takes the capture settings into account was proposed. The method for calculating the colorimetric values of a measured image contains five main steps, including conversion of the RGB values to their equivalents under the training settings, through factors based on an imaging-system model, so as to build a bridge between different settings, and scaling factors in the preparation steps of the transformation mapping, to avoid errors resulting from the nonlinearity of the polynomial mapping across different ranges of illumination level. The experimental results indicate that the prediction error of the proposed method, measured by the CIELAB color difference formula, is less than 2 CIELAB units under different illumination levels and different correlated color temperatures. This prediction accuracy for different capture settings remains at the same level as that of the conventional method for a particular lighting condition.
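The polynomial transformation mapping referred to above is commonly fitted by least squares; the sketch below shows a generic second-order RGB-to-XYZ fit under that assumption (this is not the authors' five-step method, and the training data are synthetic):

```python
import numpy as np

def expand(rgb):
    """Second-order polynomial basis for each RGB triplet (rows)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(r), r, g, b,
                     r * g, g * b, r * b, r ** 2, g ** 2, b ** 2], axis=1)

def fit_mapping(rgb, xyz):
    """Least-squares fit of the polynomial mapping RGB -> XYZ."""
    coeffs, *_ = np.linalg.lstsq(expand(rgb), xyz, rcond=None)
    return coeffs

def apply_mapping(coeffs, rgb):
    """Predict XYZ for new RGB values with the fitted coefficients."""
    return expand(rgb) @ coeffs
```

Because the polynomial is fitted for one illumination range, applying it to data captured at a very different level is where the nonlinearity errors mentioned in the abstract arise, hence the proposed scaling factors.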
Measurement of Creep Properties of Ultra-High-Temperature Materials by a Novel Non-Contact Technique
NASA Technical Reports Server (NTRS)
Hyers, Robert W.; Lee, Jonghyun; Rogers, Jan R.; Liaw, Peter K.
2007-01-01
A non-contact technique for measuring the creep properties of materials has been developed and validated as part of a collaboration among the University of Massachusetts, NASA Marshall Space Flight Center Electrostatic Levitation Facility (ESL), and the University of Tennessee. This novel method has several advantages over conventional creep testing. The sample is deformed by the centripetal acceleration from the rapid rotation, and the deformed shapes are analyzed to determine the strain. Since there is no contact with grips, there is no theoretical maximum temperature and no concern about chemical compatibility. Materials may be tested at the service temperature even for extreme environments such as rocket nozzles, or above the service temperature for accelerated testing of materials for applications such as jet engines or turbopumps for liquid-fueled engines. The creep measurements have been demonstrated to 2400 C with niobium, while the test facility, the NASA MSFC ESL, has processed materials up to 3400 C. Furthermore, the ESL creep method employs a distribution of stress to determine the stress exponent from a single test, versus the many tests required by conventional methods. Determination of the stress exponent from the ESL creep tests requires very precise measurement of the surface shape of the deformed sample for comparison to deformations predicted by finite element models for different stress exponents. An error analysis shows that the stress exponent can be determined to about 1% accuracy with the current methods and apparatus. The creep properties of single-crystal niobium at 1985 C showed excellent agreement with conventional tests performed according to ASTM Standard E-139. Tests on other metals, ceramics, and composites relevant to rocket propulsion and turbine engines are underway.
Echo movement and evolution from real-time processing.
NASA Technical Reports Server (NTRS)
Schaffner, M. R.
1972-01-01
Preliminary experimental data on the effectiveness of conventional radars in measuring the movement and evolution of meteorological echoes when the radar is connected to a programmable real-time processor are examined. In the processor, programming is accomplished by conceiving abstract machines which constitute the actual programs used in the methods employed. An analysis of these methods, such as the center-of-gravity method, the contour-displacement method, the method of slope, the cross-section method, the contour cross-correlation method, the method of echo evolution at each point, and three-dimensional measurements, shows that the motions deduced from them may differ notably (since each method determines different quantities), but the plurality of measurements may give additional information on the characteristics of the precipitation.
NASA Technical Reports Server (NTRS)
Tanton, George; Kesmodel, Roy; Burden, Judy; Su, Ching-Hua; Cobb, Sharon D.; Lehoczky, S. L.
2000-01-01
HgZnSe and HgZnTe are electronic materials of interest for potential IR detector and focal plane array applications due to their improved strength and compositional stability over HgCdTe, but they are difficult to grow on Earth and to fully characterize. Conventional contact methods of characterization, such as Hall and van der Pauw, although adequate for many situations, are typically labor intensive and not entirely suitable where only very small samples are available. To adequately characterize and compare properties of electronic materials grown in low-Earth orbit with those grown on Earth, innovative techniques are needed that complement existing methods. This paper describes the implementation and test results of a unique non-contact method of characterizing uniformity, mobility, and carrier concentration, together with results from conventional methods applied to HgZnSe and HgZnTe. The innovative method has advantages over conventional contact methods since it circumvents problems of possible contamination from alloying electrical contacts to a sample and also has the capability to map a sample. Non-destructive mapping, the determination of the carrier concentration and mobility at each place on a sample, provides a means to quantitatively compare, at high spatial resolution, effects of microgravity on electronic properties and uniformity of electronic materials grown in low-Earth orbit with Earth-grown materials. The mapping technique described here uses a 1 mm diameter polarized beam of radiation to probe the sample. Activation of a magnetic field, in which the sample is placed, causes the plane of polarization of the probe beam to rotate. This Faraday rotation is a function of the free carrier concentration and the band parameters of the material. Maps of carrier concentration, mobility, and transmission generated from measurements of the Faraday rotation angles over the temperature range from 300K to 77K will be presented.
New information on band parameters, obtained by combining results from conventional Hall measurements of the free carrier concentration with Faraday rotation measurements, will also be presented. One example of how this type of information was derived is illustrated in the following figure, which shows Faraday rotation vs wavelength modeled for Hg(1-x)Zn(x)Se at a temperature of 300K and x=0.07. The plasma contribution, total Faraday rotation, and interband contribution to the Faraday rotation are designated in the figure as del(p), FR tot, and del(i), respectively. Experimentally measured values of FR tot, each indicated by +, agree acceptably well with the model at the probe wavelength of 10.6 microns. The model shows that at the probe wavelength practically all the rotation is due to the plasma component, which can be expressed as delta_p = 2*pi*e^3*N*B*L / (c^2 * n * m*^2 * omega^2). In this equation, delta_p is the rotation angle due to the free carrier plasma, N is the free carrier concentration, B the magnetic field strength, L the thickness of the sample, n the index of refraction, omega the probe radiation frequency, c the speed of light, e the electron charge, and m* the effective mass. A measurement of N by conventional techniques, combined with a measurement of the Faraday rotation angle, allows m* to be accurately determined, since the rotation is an inverse-square function of m*.
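The plasma-rotation relation quoted above can be inverted for the effective mass once N is known from Hall measurements; a sketch of that algebra follows (the numerical inputs in the test are purely illustrative, not the paper's data):

```python
import math

def plasma_rotation(N, B, L, n, omega, m_star, e=4.803e-10, c=2.998e10):
    """Plasma contribution to the Faraday rotation angle,
    delta_p = 2*pi*e^3*N*B*L / (c^2 * n * m*^2 * omega^2),
    using the Gaussian-unit form quoted in the abstract."""
    return (2.0 * math.pi * e ** 3 * N * B * L
            / (c ** 2 * n * m_star ** 2 * omega ** 2))

def effective_mass(delta_p, N, B, L, n, omega, e=4.803e-10, c=2.998e10):
    """Invert the relation: m* from a measured rotation angle and a
    Hall-derived carrier concentration N."""
    return math.sqrt(2.0 * math.pi * e ** 3 * N * B * L
                     / (c ** 2 * n * omega ** 2 * delta_p))
```

The inversion is exact by construction (a simple square-root rearrangement), which is why the text notes that m* follows directly once N and the rotation angle are measured.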
2011-01-01
Background Hypothermia in burns is common and increases morbidity and mortality. Several methods are available to reach and maintain normal core body temperature, but they have not yet been evaluated in critical care for burned patients. Our unit's ordinary technique for controlling body temperature (Bair Hugger® + radiator ceiling + bed warmer + Hotline®) has many drawbacks: e.g., it is slow, and it hampers the working environment. The aim of this study was to compare our ordinary heating technique with two newly-developed methods: the Allon™2001 Thermowrap (a temperature-regulating water-mattress) and the Warmcloud (a temperature-regulating air-mattress). Methods Ten consecutive burned patients (> 20% total burned surface area and a core temperature < 36.0°C) were included in this prospective, randomised, comparative study. Patients were randomly exposed to 3 heating methods. Each treatment/measuring-cycle lasted for 6 hours. Each heating method was assessed for 2 hours according to a randomised timetable. Core temperature was measured using an indwelling (bladder) thermistor. Paired t-tests were used to assess the significance of differences between the treatments within the patients. ANOVA was used to assess the differences in temperature from the first to the last measurement among all treatments. Three-way ANOVA with the Tukey HSD post hoc test and a repeated-measures ANOVA were used in the same manner, but included information about patients and treatment/measuring-cycles to control for potential confounding. Data are presented as mean (SD) and (range). Probabilities of less than 0.05 were accepted as significant. Results The mean increase in core temperature per treatment/measuring-cycle, 1.4 (SD 0.6)°C (range 0.6-2.6°C), highly significantly favoured the Allon™2001 Thermowrap in contrast to the conventional method, 0.2 (0.6)°C (range -1.2 to 1.5°C), and the Warmcloud, 0.3 (0.4)°C (range -0.4 to 0.9°C).
The procedures for using the Allon™2001 Thermowrap were found to be more comfortable and straightforward than those of the conventional method or the Warmcloud. Conclusions The Allon™2001 Thermowrap was more effective than the Warmcloud or the conventional method in controlling patients' temperatures. PMID:21736717
Zhang, Shangjian; Zou, Xinhai; Wang, Heng; Zhang, Yali; Lu, Rongguo; Liu, Yong
2015-10-15
A calibration-free electrical method is proposed for measuring the absolute frequency response of directly modulated semiconductor lasers based on additional modulation. The method achieves electrical-domain measurement of the modulation index of directly modulated lasers without the need to correct for the responsivity fluctuation of the photodetector. Moreover, it doubles the measurable frequency range by setting a specific frequency relationship between the direct and additional modulations. Both the absolute and relative frequency responses of semiconductor lasers are experimentally measured from the electrical spectrum of the twice-modulated optical signal, and the measured results are compared to those obtained with conventional methods to check their consistency. The proposed method provides calibration-free and accurate measurement for high-speed semiconductor lasers with high-resolution electrical spectrum analysis.
Lee, Ki Song; Choe, Young Chan; Park, Sung Hee
2015-10-01
This study examined the structural variables affecting the environmental effects of organic farming compared to those of conventional farming. A meta-analysis based on 107 studies and 360 observations published from 1977 to 2012 compared energy efficiency (EE) and greenhouse gas emissions (GHGE) for organic and conventional farming. The meta-analysis systematically analyzed the results of earlier comparative studies and used logistic regression to identify the structural variables that contributed to differences in the effects of organic and conventional farming on the environment. The statistical evidence identified characteristics that differentiated the environmental effects of organic and conventional farming, a topic that remains controversial. The results indicated that data sources, sample size and product type significantly affected EE, whereas product type, cropping pattern and measurement unit significantly affected the GHGE of organic farming compared to conventional farming. Superior effects of organic farming on the environment were more likely to appear for larger samples, primary data rather than secondary data, monocropping rather than multicropping, and crops other than fruits and vegetables. The environmental effects of organic farming were not affected by the study period, geographic location, farm size, cropping pattern, or measurement method. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Dimensional Changes of Acrylic Resin Denture Bases: Conventional Versus Injection-Molding Technique
Gharechahi, Jafar; Asadzadeh, Nafiseh; Shahabian, Foad; Gharechahi, Maryam
2014-01-01
Objective: Acrylic resin denture bases undergo dimensional changes during polymerization. Injection molding techniques are reported to reduce these changes and thereby improve physical properties of denture bases. The aim of this study was to compare dimensional changes of specimens processed by conventional and injection-molding techniques. Materials and Methods: SR-Ivocap Triplex Hot resin was used for conventional pressure-packed and SR-Ivocap High Impact was used for injection-molding techniques. After processing, all the specimens were stored in distilled water at room temperature until measured. For dimensional accuracy evaluation, measurements were recorded at 24-hour, 48-hour and 12-day intervals using a digital caliper with an accuracy of 0.01 mm. Statistical analysis was carried out by SPSS (SPSS Inc., Chicago, IL, USA) using t-test and repeated-measures ANOVA. Statistical significance was defined at P<0.05. Results: After each water storage period, the acrylic specimens produced by injection exhibited less dimensional changes compared to those produced by the conventional technique. Curing shrinkage was compensated by water sorption with an increase in water storage time decreasing dimensional changes. Conclusion: Within the limitations of this study, dimensional changes of acrylic resin specimens were influenced by the molding technique used and SR-Ivocap injection procedure exhibited higher dimensional accuracy compared to conventional molding. PMID:25584050
NASA Technical Reports Server (NTRS)
Shimizu, H.; Kobayasi, T.; Inaba, H.
1979-01-01
A method of remote measurement of the particle size and density distribution of water droplets was developed. In this method, the size of droplets is measured from the Mie scattering parameter which is defined as the total-to-backscattering ratio of the laser beam. The water density distribution is obtained by a combination of the Mie scattering parameter and the extinction coefficient of the laser beam. This method was examined experimentally for the mist generated by an ultrasonic mist generator and applied to clouds containing rain and snow. Compared with the conventional sampling method, the present method has advantages of remote measurement capability and improvement in accuracy.
Comparison of Two Acoustic Waveguide Methods for Determining Liner Impedance
NASA Technical Reports Server (NTRS)
Jones, Michael G.; Watson, Willie R.; Tracy, Maureen B.; Parrott, Tony L.
2001-01-01
Acoustic measurements taken in a flow impedance tube are used to assess the relative accuracy of two waveguide methods for impedance eduction in the presence of grazing flow. The aeroacoustic environment is assumed to contain forward- and backward-traveling acoustic waves, consisting of multiple modes, and uniform mean flow. Both methods require a measurement of the complex acoustic pressure profile over the length of the test liner. The Single Mode Method assumes that the sound pressure level and phase decay rates of a single progressive mode can be extracted from this measured complex acoustic pressure profile. No a priori assumptions are made in the Finite Element Method regarding the modal or reflection content in the measured acoustic pressure profile. The integrity of each method is initially demonstrated by how well their no-flow impedances match those acquired in a normal incidence impedance tube. These tests were conducted using ceramic tubular and conventional perforate liners. Ceramic tubular liners were included because of their impedance insensitivity to mean flow effects. Conversely, the conventional perforate liner was included because its impedance is known to be sensitive to mean flow velocity effects. Excellent comparisons between impedance values educed with the two waveguide methods in the absence of mean flow and the corresponding values educed with the normal incidence impedance tube were observed. The two methods are then compared for mean flow Mach numbers up to 0.5, and are shown to give consistent results for both types of test liners. The quality of the results indicates that the Single Mode Method should be used when the measured acoustic pressure profile is clearly dominated by a single progressive mode, and the Finite Element Method should be used for all other cases.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., set the clock time to 3:23 and use the average power approach described in Section 5, Paragraph 5.3.2... circulates air internally or externally to the cooking product for a finite period of time after the end of... persist for an indefinite time. An indicator that only shows the user that the product is in the off...
Improved dewpoint-probe calibration
NASA Technical Reports Server (NTRS)
Stephenson, J. G.; Theodore, E. A.
1978-01-01
Relatively-simple pressure-control apparatus calibrates dewpoint probes considerably faster than conventional methods, with no loss of accuracy. Technique requires only pressure measurement at each calibration point and single absolute-humidity measurement at beginning of run. Several probes can be calibrated simultaneously and points can be checked above room temperature.
A line-scan hyperspectral Raman system for spatially offset Raman spectroscopy
USDA-ARS?s Scientific Manuscript database
Conventional methods of spatially offset Raman spectroscopy (SORS) typically use single-fiber optical measurement probes to slowly and incrementally collect a series of spatially offset point measurements moving away from the laser excitation point on the sample surface, or arrays of multiple fiber ...
A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays
NASA Technical Reports Server (NTRS)
Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.
2011-01-01
Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal-to-Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time-domain Adaptive Noise Cancellation (ANC) to microphone array signals, with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional SNR-improvement techniques of spectral subtraction and Cross-Spectral Matrix subtraction. The method was seen to recover the primary signal level at SNRs as low as -29 dB and to outperform the conventional methods. A second processing approach using the center array microphone as the noise reference was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
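The time-domain ANC scheme described above is, at its core, an adaptive filter that subtracts a filtered version of the reference-microphone signal from the primary channel. A minimal LMS sketch on synthetic tone-in-noise data (the tap count, step size, and signal parameters are illustrative assumptions, not the paper's settings):

```python
import math
import random

def lms_anc(primary, reference, taps=8, mu=0.01):
    """Time-domain LMS adaptive noise canceller.
    primary:   desired signal + correlated noise (array-microphone channel)
    reference: noise-only measurement (reference-microphone channel)
    Returns the error sequence e[n], which converges to the signal."""
    w = [0.0] * taps
    buf = [0.0] * taps
    out = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]                                 # shift reference into tap line
        y = sum(wi * xi for wi, xi in zip(w, buf))           # filter output = noise estimate
        e = d - y                                            # residual = signal estimate
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, buf)] # LMS weight update
        out.append(e)
    return out

# Synthetic demo: weak 200 Hz tone buried in broadband noise (SNR well below 0 dB)
random.seed(0)
fs, n = 8000, 4000
noise = [random.gauss(0, 1.0) for _ in range(n)]
signal = [0.1 * math.sin(2 * math.pi * 200 * t / fs) for t in range(n)]
primary = [s + 0.9 * v for s, v in zip(signal, noise)]   # noise leaks into primary channel
cleaned = lms_anc(primary, noise)

# After convergence the residual noise power is far below the input noise power (~0.81)
tail = cleaned[n // 2:]
resid = sum((c - s) ** 2 for c, s in zip(tail, signal[n // 2:])) / len(tail)
print(f"residual noise power after ANC: {resid:.4f}")
```

The step size must satisfy the usual LMS stability bound (roughly mu < 1/(taps × reference power)); larger values converge faster but leave more misadjustment noise in the cleaned signal.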
Prasad, Rahul; Al-Keraif, Abdulaziz Abdullah; Kathuria, Nidhi; Gandhi, P V; Bhide, S V
2014-02-01
The purpose of this study was to determine whether the ringless casting and accelerated wax-elimination techniques can be combined to offer a cost-effective, clinically acceptable, and time-saving alternative for fabricating single-unit castings in fixed prosthodontics. Sixty standardized wax copings were fabricated on a type IV stone replica of a stainless steel die. The wax patterns were divided into four groups. The first group was cast using the ringless investment technique and conventional wax-elimination method; the second group was cast using the ringless investment technique and accelerated wax-elimination method; the third group was cast using the conventional metal ring investment technique and conventional wax-elimination method; the fourth group was cast using the metal ring investment technique and accelerated wax-elimination method. The vertical marginal gap was measured at four sites per specimen, using a digital optical microscope at 100× magnification. The results were analyzed using two-way ANOVA to determine statistical significance. The vertical marginal gaps of castings fabricated using the ringless technique (76.98 ± 7.59 μm) were significantly smaller (p < 0.05) than those of castings fabricated using the conventional metal ring technique (138.44 ± 28.59 μm); however, the difference between the conventional (102.63 ± 36.12 μm) and accelerated (112.79 ± 38.34 μm) wax-elimination castings was not statistically significant (p > 0.05). The ringless investment technique can produce castings with higher accuracy and can be favorably combined with the accelerated wax-elimination method as a viable alternative to the time-consuming conventional technique of casting restorations in fixed prosthodontics. © 2013 by the American College of Prosthodontists.
FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.
Li, Pu; Chen, Bing
2011-04-01
Although many studies on municipal solid waste (MSW) management were conducted under uncertain conditions where fuzzy, stochastic, and interval information coexist, the conventional linear programming approach to integrating the fuzzy method with the other two was inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming for supporting municipal solid waste management. Nguyen's method was used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs, by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as the superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method improves upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, obtaining solutions with fewer constraints and significantly reduced computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions had been generated. The solution can help quantify the relationship between the change of system cost and the uncertainties, which could support further analysis of tradeoffs between the waste management cost and the system failure risk. Copyright © 2010 Elsevier Ltd. All rights reserved.
Yamada, Toru; Umeyama, Shinji; Matsuda, Keiji
2012-01-01
In conventional functional near-infrared spectroscopy (fNIRS), systemic physiological fluctuations evoked by a body's motion and psychophysiological changes often contaminate fNIRS signals. We propose a novel method for separating functional and systemic signals based on their hemodynamic differences. Considering their physiological origins, we assumed a negative and positive linear relationship between oxy- and deoxyhemoglobin changes of functional and systemic signals, respectively. Their coefficients are determined by an empirical procedure. The proposed method was compared to conventional and multi-distance NIRS. The results were as follows: (1) Nonfunctional tasks evoked substantial oxyhemoglobin changes, and comparatively smaller deoxyhemoglobin changes, in the same direction by conventional NIRS. The systemic components estimated by the proposed method were similar to the above finding. The estimated functional components were very small. (2) During finger-tapping tasks, laterality in the functional component was more distinctive using our proposed method than that by conventional fNIRS. The systemic component indicated task-evoked changes, regardless of the finger used to perform the task. (3) For all tasks, the functional components were highly coincident with signals estimated by multi-distance NIRS. These results strongly suggest that the functional component obtained by the proposed method originates in the cerebral cortical layer. We believe that the proposed method could improve the reliability of fNIRS measurements without any modification in commercially available instruments. PMID:23185590
Baek, Hyun Jae; Shin, JaeWook; Jin, Gunwoo; Cho, Jaegeol
2017-10-24
Photoplethysmographic signals are useful for heart rate variability analysis in practical ambulatory applications. While reducing the sampling rate of signals is an important consideration for modern wearable devices that enable 24/7 continuous monitoring, few studies have investigated how to compensate for the low timing resolution of low-sampling-rate signals in accurate heart rate variability analysis. In this study, we utilized the parabola approximation method and measured it against the conventional cubic spline interpolation method for the time, frequency, and nonlinear domain variables of heart rate variability. For each parameter, the intra-class correlation, standard error of measurement, Bland-Altman 95% limits of agreement, and root mean squared relative error were presented. Also, the time taken to compute each interpolation algorithm was investigated. The results indicated that parabola approximation is a simple, fast, and accurate method for compensating the low timing resolution of pulse beat intervals. In addition, the method showed comparable performance with the conventional cubic spline interpolation method. Even though the absolute values of the heart rate variability variables calculated using a signal sampled at 20 Hz did not exactly match those calculated using a reference signal sampled at 250 Hz, the parabola approximation method remains a good interpolation method for assessing trends in HRV measurements for low-power wearable applications.
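The parabola approximation evaluated in this study refines a discrete peak location by fitting a quadratic through the peak sample and its two neighbours, recovering sub-sample timing from a low-rate signal. A minimal sketch under assumed signal parameters (a Gaussian pulse standing in for a PPG beat, sampled at 20 Hz):

```python
import math

def parabola_peak(t, y):
    """Refine a discrete peak location by fitting a parabola through the
    maximum sample and its two neighbours; returns the sub-sample peak time."""
    i = max(range(1, len(y) - 1), key=lambda k: y[k])  # interior argmax
    y0, y1, y2 = y[i - 1], y[i], y[i + 1]
    denom = y0 - 2 * y1 + y2
    offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom  # in sample units
    return t[i] + offset * (t[1] - t[0])

fs = 20.0                      # low sampling rate (Hz)
true_peak = 0.537              # true pulse-peak time (s), off the sample grid
t = [k / fs for k in range(40)]
y = [math.exp(-((tk - true_peak) ** 2) / 0.01) for tk in t]  # Gaussian pulse

est = parabola_peak(t, y)
print(f"grid resolution {1000/fs:.0f} ms, parabola error {abs(est-true_peak)*1000:.2f} ms")
```

With a 50 ms sample grid, the quadratic fit recovers the peak time to within a couple of milliseconds here, which is why beat-interval trends survive the low sampling rate.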
NASA Technical Reports Server (NTRS)
Roth, Mark C. (Inventor); Smith, Russell W. (Inventor); Sikora, Joseph G. (Inventor); Rivers, H. Kevin (Inventor); Johnston, William M. (Inventor)
2016-01-01
An ultra-high temperature optical method incorporates speckle optics for sensing displacement and strain at temperatures well above the range of conventional measurement techniques. High-temperature pattern materials are used which can endure experimental high-temperature environments while having minimal optical aberration. A purge medium is used to reduce or eliminate optical distortions and to reduce and/or eliminate oxidation of the target specimen.
Measurement of the $B^-$ lifetime using a simulation free approach for trigger bias correction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaltonen, T.; /Helsinki Inst. of Phys.; Adelman, J.
2010-04-01
The collection of a large number of B hadron decays to hadronic final states at the CDF II detector is possible due to the presence of a trigger that selects events based on track impact parameters. However, the nature of the selection requirements of the trigger introduces a large bias in the observed proper decay time distribution. A lifetime measurement must correct for this bias, and the conventional approach has been to use a Monte Carlo simulation. The leading sources of systematic uncertainty in the conventional approach are due to differences between the data and the Monte Carlo simulation. In this paper they present an analytic method for bias correction without using simulation, thereby removing any uncertainty between data and simulation. This method is presented in the form of a measurement of the lifetime of the B⁻ using the mode B⁻ → D⁰π⁻. The B⁻ lifetime is measured as τ(B⁻) = 1.663 ± 0.023 ± 0.015 ps, where the first uncertainty is statistical and the second systematic. This new method results in a smaller systematic uncertainty in comparison to methods that use simulation to correct for the trigger bias.
Amador, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F.; Urban, Matthew W.
2017-01-01
Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocities values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index (BMI), ultrasound scanners, scanning protocols, ultrasound image quality, etc. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this study, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time (spatiotemporal peak, STP); the second method applies an amplitude filter (spatiotemporal thresholding, STTH) to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared to TTP in phantom. Moreover, in a cohort of 14 healthy subjects STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared to conventional TTP. PMID:28092532
Amador Carrascal, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F; Urban, Matthew W
2017-04-01
Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocity values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index, ultrasound scanners, scanning protocols, and ultrasound image quality. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this paper, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time [spatiotemporal peak (STP)]; the second method applies an amplitude filter [spatiotemporal thresholding (STTH)] to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared with TTP in phantom. Moreover, in a cohort of 14 healthy subjects, STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared with conventional TTP.
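The conventional TTP estimator that both modified methods are compared against can be sketched simply: find the time of maximum motion at each lateral position, then regress position on peak time; the slope is the group velocity. The synthetic plane-wave data and the 2 m/s speed below are illustrative assumptions, not the study's measurements:

```python
import math

def ttp_velocity(positions_mm, profiles, dt_ms):
    """Conventional time-to-peak (TTP) shear wave group-velocity estimate:
    peak time at each lateral position, then a least-squares fit of
    position vs. peak time. Slope in mm/ms equals speed in m/s."""
    t_peaks = [max(range(len(p)), key=lambda k: p[k]) * dt_ms for p in profiles]
    n = len(positions_mm)
    mt = sum(t_peaks) / n
    mx = sum(positions_mm) / n
    num = sum((t - mt) * (x - mx) for t, x in zip(t_peaks, positions_mm))
    den = sum((t - mt) ** 2 for t in t_peaks)
    return num / den

# Synthetic plane shear wave at 2 m/s, motion sampled every 0.1 ms
v_true, dt = 2.0, 0.1
xs = [2.0 + 0.5 * i for i in range(8)]   # lateral positions (mm)
profiles = []
for x in xs:
    t0 = x / v_true                       # arrival time at this position (ms)
    profiles.append([math.exp(-((k * dt - t0) ** 2) / 0.05) for k in range(200)])

v_est = ttp_velocity(xs, profiles, dt)
print(f"TTP estimate: {v_est:.2f} m/s")
```

The spatiotemporal variants in the paper differ only in how the peak is selected (over space and time jointly, or after amplitude thresholding), which is what makes them more robust to noisy motion data than this per-position argmax.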
NASA Astrophysics Data System (ADS)
Matsutani, Natsuki; Lee, Heeyoung; Mizuno, Yosuke; Nakamura, Kentaro
2018-01-01
For Brillouin-sensing applications, we develop a method for mitigating the Fresnel reflection at the perfluorinated-polymer-optical-fiber ends by covering them with an amorphous fluoropolymer (CYTOP, fiber core material) dissolved in a volatile solvent. Unlike the conventional method using water, even after solvent evaporation, the CYTOP layer remains, resulting in long-term Fresnel reduction. In addition, the high viscosity of the CYTOP solution is a practical advantage. The effectiveness of this method is experimentally proved by Brillouin measurement.
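The improvement from the CYTOP coating can be quantified with the normal-incidence Fresnel power reflectance, R = ((n1 - n2)/(n1 + n2))². A minimal sketch; the refractive indices are assumed nominal values, not figures from the paper:

```python
def fresnel_reflectance(n1, n2):
    """Normal-incidence Fresnel power reflectance at an interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

n_cytop = 1.35   # assumed nominal index of the CYTOP fiber core
n_air = 1.00
n_water = 1.33

print(f"bare fiber end (CYTOP/air): {fresnel_reflectance(n_cytop, n_air):.4f}")
print(f"water-covered end:          {fresnel_reflectance(n_cytop, n_water):.6f}")
# A residual CYTOP layer is index-matched to the core, so the end-face
# reflection essentially vanishes, and it persists after solvent evaporation.
print(f"CYTOP-covered end:          {fresnel_reflectance(n_cytop, n_cytop):.6f}")
```

This is why water works only temporarily (it evaporates, restoring the ~2% air reflection) while the index-matched residual CYTOP layer keeps the reflection suppressed long-term.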
ERIC Educational Resources Information Center
Smith, Alva Nelson
Two instructional methods were identified and compared to determine if any significant differences could be noted on three criterion measures. Measurements were conducted in the areas of achievement in biology, science attitudes, and critical thinking ability. Student ability was measured using pre-tests and the Scholastic Aptitude Test. Students…
Cat-eye effect target recognition with single-pixel detectors
NASA Astrophysics Data System (ADS)
Jian, Weijian; Li, Li; Zhang, Xiaoyue
2015-12-01
A prototype of cat-eye effect target recognition with single-pixel detectors is proposed. Based on the framework of compressive sensing, it is possible to recognize cat-eye effect targets by projecting a series of known random patterns and measuring the backscattered light with three single-pixel detectors in different locations. The prototype only requires simpler, less expensive detectors and extends well beyond the visible spectrum. Simulations were performed to evaluate the feasibility of the proposed prototype. We compared our results to those obtained from conventional cat-eye effect target recognition methods using an area array sensor. The experimental results show that this method is feasible and superior to the conventional method in dynamic and complicated backgrounds.
Extinction of the soleus H reflex induced by conditioning stimulus given after test stimulus.
Hiraoka, Koichi
2002-02-01
To quantify the extinction of the soleus H reflex induced by a conditioning stimulus above the motor threshold applied to the posterior tibial nerve 10-12 ms after a test stimulus (S2 method). Ten healthy subjects participated. The sizes of extinction induced by a test stimulus above the motor threshold (conventional method) and by the S2 method were measured. The size of the conditioned H reflex decreased as the intensity of the S2 conditioning stimulus increased. The decrease was less than that induced by the conventional method. The difference between the two methods correlated highly with the amount of orthodromically activated recurrent inhibition. When the S2 conditioning stimulus evoked an M wave that was roughly half of the maximum M wave, the decrease in the size of the conditioned H reflex depended on the size of the unconditioned H reflex. The S2 method allows us to observe extinction without changing the intensity of the test stimulus. The amount of the extinction depends partially on the size of the unconditioned H reflex. The difference in the sizes of extinction between the S2 and conventional methods should relate to recurrent inhibition.
Shamata, Awatif; Thompson, Tim
2018-05-10
Non-contact three-dimensional (3D) surface scanning has been applied in forensic medicine and has been shown to mitigate shortcomings of traditional documentation methods. The aim of this paper is to assess the efficiency of structured light 3D surface scanning in recording traumatic injuries of live cases in clinical forensic medicine. The work was conducted in the Medico-Legal Centre in Benghazi, Libya. A structured light 3D surface scanner and an ordinary digital camera with a close-up lens were used to record the injuries and to obtain 3D and two-dimensional (2D) documents of the same traumas. Two different types of comparison were performed. Firstly, the 3D wound documents were compared to the 2D documents based on subjective visual assessment. Additionally, 3D wound measurements were compared to conventional measurements to determine whether there was a statistically significant difference between them. For this, the Friedman test was used. The study established that the 3D wound documents had extra features over the 2D documents. Moreover, the 3D scanning method was able to overcome the main deficiencies of digital photography. No statistically significant difference was found between the 3D and conventional wound measurements. Spearman's correlation established a strong, positive correlation between the 3D and conventional measurement methods. Although 3D surface scanning of the injuries of live subjects faced some difficulties, the 3D results were valuable, and the validity of 3D measurements based on structured light 3D scanning was established. Further work will be carried out in forensic pathology to scan open injuries with depth information. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.
Qiu, Jin; Cheng, Jiajing; Wang, Qingying; Hua, Jie
2014-01-01
Background The aim of this study was to compare the effects of the levonorgestrel-releasing intrauterine system (LNG-IUS) with conventional medical treatment in reducing heavy menstrual bleeding. Material/Methods Relevant studies were identified by a search of MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials, and clinical trials registries (from inception to April 2014). Randomized controlled trials comparing the LNG-IUS with conventional medical treatment (mefenamic acid, tranexamic acid, norethindrone, medroxyprogesterone acetate injection, or combined oral contraceptive pills) in patients with menorrhagia were included. Results Eight randomized controlled trials that included 1170 women (LNG-IUS, n=562; conventional medical treatment, n=608) met inclusion criteria. The LNG-IUS was superior to conventional medical treatment in reducing menstrual blood loss (as measured by the alkaline hematin method or estimated by pictorial bleeding assessment chart scores). More women were satisfied with the LNG-IUS than with the use of conventional medical treatment (odds ratio [OR] 5.19, 95% confidence interval [CI] 2.73–9.86). Compared with conventional medical treatment, the LNG-IUS was associated with a lower rate of discontinuation (14.6% vs. 28.9%, OR 0.39, 95% CI 0.20–0.74) and fewer treatment failures (9.2% vs. 31.0%, OR 0.18, 95% CI 0.10–0.34). Furthermore, quality of life assessment favored LNG-IUS over conventional medical treatment, although use of various measurements limited our ability to pool the data for more powerful evidence. Serious adverse events were statistically comparable between treatments. Conclusions The LNG-IUS was the more effective first choice for management of menorrhagia compared with conventional medical treatment. Long-term, randomized trials are required to further investigate patient-based outcomes and evaluate the cost-effectiveness of the LNG-IUS and other medical treatments. PMID:25245843
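The odds ratios reported above come from 2×2 event tables, with the standard Woolf (log-normal) interval attached. A minimal sketch with hypothetical counts, chosen only to roughly mirror the reported discontinuation rates (they are not the trials' actual numbers):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf (log-normal) 95% CI from a 2x2 table:
    a, b = events / non-events in group 1; c, d = events / non-events in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical table: 15/100 discontinued on LNG-IUS vs 30/100 on
# conventional medical treatment
or_, lo, hi = odds_ratio_ci(15, 85, 30, 70)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

An OR below 1 with a CI excluding 1, as in the pooled discontinuation result (OR 0.39, 95% CI 0.20-0.74), indicates significantly lower odds of the event in the first group.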
Novel Digital Driving Method Using Dual Scan for Active Matrix Organic Light-Emitting Diode Displays
NASA Astrophysics Data System (ADS)
Jung, Myoung Hoon; Choi, Inho; Chung, Hoon-Ju; Kim, Ohyun
2008-11-01
A new digital driving method has been developed for low-temperature polycrystalline silicon, transistor-driven, active-matrix organic light-emitting diode (AM-OLED) displays by time-ratio gray-scale expression. This driving method effectively increases the emission ratio and the number of subfields by inserting another subfield set into nondisplay periods in the conventional digital driving method. By employing the proposed modified gravity center coding, this method can be used to effectively compensate for dynamic false contour noise. The operation and performance were verified by current measurement and image simulation. The simulation results using eight test images show that the proposed approach improves the average peak signal-to-noise ratio by 2.61 dB, and the emission ratio by 20.5%, compared with the conventional digital driving method.
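The peak signal-to-noise ratio used above to quantify image improvement is defined from the mean squared error between a reference and a test image. A minimal sketch assuming 8-bit images; this is a generic illustration of the metric, not the authors' simulation code:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """PSNR in dB between two equal-shape images (8-bit range assumed)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: infinite PSNR by convention
    return 10.0 * np.log10(max_val ** 2 / mse)
```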
Leaf epidermis images for robust identification of plants
da Silva, Núbia Rosa; Oliveira, Marcos William da Silva; Filho, Humberto Antunes de Almeida; Pinheiro, Luiz Felipe Souza; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez
2016-01-01
This paper proposes a methodology for plant analysis and identification based on extracting texture features from microscopic images of the leaf epidermis. All the experiments were carried out using 32 plant species, with 309 epidermal samples captured by an optical microscope coupled to a digital camera. The results of the computational methods using texture features were compared to the conventional approach, in which quantitative measurements of stomatal traits (density, length and width) were obtained manually. Epidermis image classification using texture achieved a success rate of over 96%, whereas the success rate was around 60% for the manually taken quantitative measurements. Furthermore, we verified the robustness of our method with respect to the natural phenotypic plasticity of stomata, analysing samples from the same species grown in different environments. The texture methods remained robust even under phenotypic plasticity of stomatal traits, with a decrease of only 20% in the success rate, whereas the quantitative measurements proved highly sensitive to plasticity, with a decrease of 77%. The comparison between the computational approach and the conventional quantitative measurements shows how advantageous and promising computational systems are for solving problems in botany, such as species identification. PMID:27217018
Mizuno, Kentaro; Mikami, Yasuo; Hase, Hitoshi; Ikeda, Takumi; Nagae, Masateru; Tonomura, Hitoshi; Shirai, Toshiharu; Fujiwara, Hiroyoshi; Kubo, Toshikazu
2017-02-01
A technical note and retrospective study. The objectives were to describe a new method of drainage tube placement during microendoscopic spinal decompression, and compare the positioning and fluid discharge obtained with this method and the conventional method. To prevent postoperative epidural hematoma after microendoscopic decompression, a drainage tube must be placed in a suitable location. However, the narrow operative field makes precise control of the position of the tube technically difficult. We developed a method to reliably place the tube in the desired location. We use a Deschamps aneurysm needle with a slightly curved tip, which we call a drain passer. With the microendoscope in position, the drain passer, with a silk thread passed through the eye at the needle tip, is inserted percutaneously into the endoscopic field of view. The drainage tube is passed through the loop of silk thread protruding from the inside of the tubular retractor, and the thread is pulled to the outside, guiding the end of the drainage tube into the wound. This method was used in 23 cases at 44 intervertebral levels (drain passer group), and the conventional method in 20 cases at 32 intervertebral levels (conventional group). Postoperative plain radiographs were taken, and the amount of fluid discharge at postoperative hour 24 was measured. Drainage tube positioning was favorable at 43 intervertebral levels (97.7%) in the drain passer group and 26 intervertebral levels (81.3%) in the conventional group. Mean fluid discharge was 58.4±32.2 g in the drain passer group and 38.4±23.0 g in the conventional group. Positioning was significantly better and fluid discharge was significantly greater in the drain passer group. The results indicate that this method is a useful drainage tube placement technique for preventing postoperative epidural hematoma.
MEMS piezoresistive cantilever for the direct measurement of cardiomyocyte contractile force
NASA Astrophysics Data System (ADS)
Matsudaira, Kenei; Nguyen, Thanh-Vinh; Hirayama Shoji, Kayoko; Tsukagoshi, Takuya; Takahata, Tomoyuki; Shimoyama, Isao
2017-10-01
This paper reports on a method to directly measure the contractile forces of cardiomyocytes using MEMS (micro electro mechanical systems)-based force sensors. The fabricated sensor chip consists of piezoresistive cantilevers that can measure contractile forces with high frequency (several tens of kHz) and high sensing resolution (less than 0.1 nN). Moreover, the proposed method does not require a complex observation system or image processing, which are necessary in conventional optical-based methods. This paper describes the design, fabrication, and evaluation of the proposed device and demonstrates the direct measurements of contractile forces of cardiomyocytes using the fabricated device.
NASA Technical Reports Server (NTRS)
Greenberg, Harry
1941-01-01
The pitching and the yawing moments of a vee-type and a conventional type of tail surface were measured. The tests were made in the presence of a fuselage and a wing-fuselage combination in such a way as to determine the moments contributed by the tail surfaces. The results showed that the vee-type tail tested, with a dihedral angle of 35.3 deg, was about 71 percent as effective in pitch as the conventional tail and had a yawing-moment to pitching-moment ratio of 0.3. The conventional tail, the panels of which were all congruent to those of the vee-type tail, had a yawing-moment to pitching-moment ratio of 0.48. These ratios are in fair agreement with values calculated by methods shown in this and previous reports. Fuselage interference reduced the measured moments 15 to 25 percent below the calculated values.
Sreemany, Arpita; Bera, Melinda Kumar; Sarkar, Anindya
2017-12-30
The elaborate sampling and analytical protocol associated with conventional dual-inlet isotope ratio mass spectrometry has long hindered high-resolution climate studies of biogenic accretionary carbonates. Laser-based on-line systems, in comparison, produce rapid data but suffer from unresolved matrix effects. It is therefore necessary to resolve these matrix effects to take advantage of the automated laser-based method. Two marine bivalve shells (one aragonite and one calcite) and one fish otolith (aragonite) were first analysed using a CO2 laser ablation system attached to a continuous-flow isotope ratio mass spectrometer under different experimental conditions (different laser powers, untreated vs. vacuum-roasted samples). The shells and the otolith were then micro-drilled and the isotopic compositions of the powders were measured in a dual-inlet isotope ratio mass spectrometer following the conventional acid digestion method. The vacuum-roasted samples (both aragonite and calcite) produced mean isotopic ratios (with a reproducibility of ±0.2‰ for both δ18O and δ13C values) almost identical to the values obtained using the conventional acid digestion method. As the isotopic ratios of the acid-digested samples fall within the analytical precision (±0.2‰) of the laser ablation system, the method is suitable for studying the biogenic accretionary carbonate matrix. When using laser-based continuous-flow isotope ratio mass spectrometry for high-resolution isotopic measurements of biogenic carbonates, the employment of a vacuum-roasting step will reduce the matrix effect. This method will be of immense help to geologists and sclerochronologists in exploring short-term changes in climatic parameters (e.g. seasonality) in geological time. Copyright © 2017 John Wiley & Sons, Ltd.
Combining remotely sensed and other measurements for hydrologic areal averages
NASA Technical Reports Server (NTRS)
Johnson, E. R.; Peck, E. L.; Keefer, T. N.
1982-01-01
A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
Song, Zhixin; Xie, Baoyuan; Ma, Huaian; Zhang, Rui; Li, Pengfei; Liu, Lihong; Yue, Yuhong; Zhang, Jianping; Tong, Qing; Wang, Qingtao
2016-09-01
The level of glycated hemoglobin (HbA1c) has been recognized as an important indicator of long-term glycemic control. However, the HbA1c measurement is not currently included as a diagnostic determinant in China. The current study aims to assess a candidate modified International Federation of Clinical Chemistry reference method for the forthcoming standardization of HbA1c measurements in China. The HbA1c concentration was measured using a modified high-performance liquid chromatography-electrospray ionization-mass spectrometry (HPLC-ESI-MS) method. The modified method replaces the propylcyanide column with a C18 reversed-phase column, which has a lower cost and is more commonly used in China, and uses 0.1% (26.5 mmol/l) formic acid instead of trifluoroacetic acid. Moreover, to minimize matrix interference and reduce the running time, a solid-phase extraction was employed. The discrepancies between HbA1c measurements obtained using conventional methods and the HPLC-ESI-MS method were clarified in clinical samples from healthy people and diabetic patients. Corresponding samples were distributed to 89 hospitals in Beijing for external quality assessment. The linearity, reliability, and accuracy of the modified HPLC-ESI-MS method, with a shortened running time of 6 min, were successfully validated. Of the 89 hospitals evaluated, the relative biases of the HbA1c concentrations were < 8% for 74 hospitals and < 5% for 60 hospitals. Compared with other conventional methods, HbA1c concentrations determined by HPLC methods were similar to the values obtained from the current HPLC-ESI-MS method. The HPLC-ESI-MS method represents an improvement over existing methods and provides a simple, stable, and rapid HbA1c measurement with strong signal intensities and reduced ion suppression. © 2015 Wiley Periodicals, Inc.
Self-Discrepancy: Comparisons of the Psychometric Properties of Three Instruments
ERIC Educational Resources Information Center
Watson, Neill; Bryan, Brandon C.; Thrash, Todd M.
2010-01-01
In 2 studies, the psychometric properties of 3 methods for measuring real-ideal and real ought self-discrepancies were compared: the idiographic Self-Concept Questionnaire--Personal Constructs, the nonidiographic Self-Concept Questionnaire--Conventional Constructs, and the content-free Abstract Measures. In the 1st study, 125 students at a…
Vojdani, M; Torabi, K; Farjood, E; Khaledi, AAR
2013-01-01
Statement of Problem: Metal-ceramic crowns are the most commonly used complete-coverage restorations in daily clinical use. The disadvantages of conventional hand-made wax patterns have prompted alternative fabrication routes based on CAD/CAM technologies. Purpose: This study compares the marginal and internal fit of copings cast from CAD/CAM and conventionally fabricated wax patterns. Materials and Method: Twenty-four standardized brass dies were prepared and randomly divided into 2 groups according to the wax-pattern fabrication method (CAD/CAM technique and conventional method) (n=12). All the wax patterns were fabricated in a standard fashion with respect to contour, thickness, and internal relief (M1-M12: CAD/CAM group; C1-C12: conventional group). A CAD/CAM milling machine (Cori TEC 340i; imes-icore GmbH, Eiterfeld, Germany) was used to fabricate the CAD/CAM group wax patterns. The copings cast from the 24 wax patterns were cemented to the corresponding dies. For all the coping-die assemblies, a cross-sectional technique was used to evaluate the marginal and internal fit at 15 points. The Student's t-test was used for statistical analysis (α=0.05). Results: The overall mean (SD) absolute marginal discrepancy (AMD) was 254.46 (25.10) µm for the CAD/CAM group and 88.08 (10.67) µm for the conventional (control) group. The overall mean internal gap total (IGT) was 110.77 (5.92) µm for the CAD/CAM group and 76.90 (10.17) µm for the conventional group. The Student's t-test revealed significant differences between the 2 groups. Marginal and internal gaps were significantly larger at all measured areas in the CAD/CAM group than in the conventional group (p< 0.001). Conclusion: Within the limitations of this study, the conventional method of wax-pattern fabrication produced copings with significantly better marginal and internal fit than the CAD/CAM (machine-milled) technique.
Because all factors in the 2 groups were standardized except the wax-pattern fabrication technique, only the conventional group yielded copings with clinically acceptable margins of less than 120 µm. PMID:24724133
Assessment of radiant temperature in a closed incubator.
Décima, Pauline; Stéphan-Blanchard, Erwan; Pelletier, Amandine; Ghyselen, Laurent; Delanaud, Stéphane; Dégrugilliers, Loïc; Telliez, Frédéric; Bach, Véronique; Libert, Jean-Pierre
2012-08-01
In closed incubators, radiative heat loss (R), which is assessed from the mean radiant temperature (Tr), accounts for 40-60% of the neonate's total heat loss. In the absence of a benchmark method for calculating Tr, which is often assumed to equal the incubator air temperature, errors could have a considerable impact on the thermal management of neonates. We compared Tr obtained using two conventional methods (measurement with a black-globe thermometer and a radiative "view factor" approach) and two methods based on nude thermal manikins (a simple, schematic design from Wheldon and a multisegment, anthropometric device developed in our laboratory). Taking the Tr estimate from each method, we calculated metabolic heat production values by partitional calorimetry and then compared them with the values calculated from VO2 and VCO2 measured in 13 preterm neonates. Comparisons between the calculated and measured metabolic heat production values showed that the two conventional methods and Wheldon's manikin underestimated R, whereas with the anthropomorphic thermal manikin the simulated versus clinical difference was not statistically significant. In conclusion, there is a need for a safety standard for measuring Tr in a closed incubator. This standard should also provide estimating equations for all avenues of the neonate's heat exchange, taking into account metabolic heat production and the thermal insulation provided by the diaper and the mattress. Although thermal manikins appear to be particularly appropriate for measuring Tr, the current lack of standardized procedures limits their widespread use.
Unconventional Aqueous Humor Outflow: A Review
Johnson, Mark; McLaren, Jay W.; Overby, Darryl R.
2016-01-01
Aqueous humor flows out of the eye primarily through the conventional outflow pathway that includes the trabecular meshwork and Schlemm's canal. However, a fraction of aqueous humor passes through an alternative or ‘unconventional’ route that includes the ciliary muscle, supraciliary and suprachoroidal spaces. From there, unconventional outflow may drain through two pathways: a uveoscleral pathway where aqueous drains across the sclera to be resorbed by orbital vessels, and a uveovortex pathway where aqueous humor enters the choroid to drain through the vortex veins. We review the anatomy, physiology and pharmacology of these pathways. We also discuss methods to determine unconventional outflow rate, including direct techniques that use radioactive or fluorescent tracers recovered from tissues in the unconventional pathway and indirect methods that estimate unconventional outflow based on total outflow over a range of pressures. Indirect methods are subject to a number of assumptions and generally give poor agreement with tracer measurements. We review the variety of animal models that have been used to study conventional and unconventional outflow. The mouse appears to be a promising model because it captures several aspects of conventional and unconventional outflow dynamics common to humans, although questions remain regarding the magnitude of unconventional outflow in mice. Finally, we review future directions. There is a clear need to develop improved methods for measuring unconventional outflow in both animals and humans. PMID:26850315
Fractal analysis of GPS time series for early detection of disastrous seismic events
NASA Astrophysics Data System (ADS)
Filatov, Denis M.; Lyubushin, Alexey A.
2017-03-01
A new method of fractal analysis of time series for estimating the chaoticity of behaviour of open stochastic dynamical systems is developed. The method is a modification of the conventional detrended fluctuation analysis (DFA) technique. We start from analysing both methods from the physical point of view and demonstrate the difference between them which results in a higher accuracy of the new method compared to the conventional DFA. Then, applying the developed method to estimate the measure of chaoticity of a real dynamical system - the Earth's crust, we reveal that the latter exhibits two distinct mechanisms of transition to a critical state: while the first mechanism has already been known due to numerous studies of other dynamical systems, the second one is new and has not previously been described. Using GPS time series, we demonstrate efficiency of the developed method in identification of critical states of the Earth's crust. Finally we employ the method to solve a practically important task: we show how the developed measure of chaoticity can be used for early detection of disastrous seismic events and provide a detailed discussion of the numerical results, which are shown to be consistent with outcomes of other researches on the topic.
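The conventional DFA that the authors modify proceeds by integrating the series, detrending it in windows of varying length, and reading the scaling exponent off a log-log fit of fluctuation versus window size. A minimal DFA-1 sketch of that standard technique (not the authors' modified method):

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis (DFA-1): return scaling exponent alpha."""
    y = np.cumsum(x - np.mean(x))  # integrated 'profile' of the series
    fluct = []
    for n in scales:
        n_seg = len(y) // n
        segments = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        ms = []
        for seg in segments:
            coef = np.polyfit(t, seg, 1)  # linear local trend (DFA-1)
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(ms)))  # RMS fluctuation F(n)
    # alpha is the slope of log F(n) versus log n
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]
```

For uncorrelated noise alpha is close to 0.5; long-range correlated series give larger exponents, which is the property exploited for detecting changes in chaoticity.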
The femoral neck-shaft angle on plain radiographs: a systematic review.
Boese, Christoph Kolja; Dargel, Jens; Oppermann, Johannes; Eysel, Peer; Scheyerer, Max Joseph; Bredow, Jan; Lechler, Philipp
2016-01-01
The femoral neck-shaft angle (NSA) is an important measure for the assessment of the anatomy of the hip and planning of operations. Despite its common use, there remains disagreement concerning the method of measurement and the correction of hip rotation and femoral version of the projected NSA on conventional radiographs. We addressed the following questions: (1) What are the reported values for NSA in normal adult subjects and in osteoarthritis? (2) Is there a difference between non-corrected and rotation-corrected measurements? (3) Which methods are used for measuring the NSA on plain radiographs? (4) What could be learned from an analysis of the intra- and interobserver reliability? A systematic literature search was performed including 26 publications reporting the measurement of the NSA on conventional radiographs. The mean NSA of healthy adults (5,089 hips) was 128.8° (98-180°) and 131.5° (115-155°) in patients with osteoarthritis (1230 hips). The mean NSA was 128.5° (127-130.5°) for the rotation-corrected and 129.5° (119.6-151°) for the non-corrected measurements. Our data showed a high variance of the reported neck-shaft angles. Notably, we identified the inconsistency of the published methods of measurement as a central issue. The reported effect of rotation-correction cannot be reliably verified.
NASA Astrophysics Data System (ADS)
Chauhan, H.; Krishna Mohan, B.
2014-11-01
The present study was undertaken to evaluate the effectiveness of spectral similarity measures for developing precise crop spectra from collected hyperspectral field spectra. In multispectral and hyperspectral remote sensing, pixels are classified by statistical comparison (by means of spectral similarity) of known field or library spectra with unknown image spectra. Although these algorithms are widely used, little emphasis has been placed on using spectral similarity measures to select precise crop spectra from a set of field spectra. Conventionally, crop spectra are developed after rejecting outliers based only on broad-spectrum analysis. Here, a successful attempt has been made to develop precise crop spectra based on spectral similarity. Because the use of unevaluated data leads to uncertainty in image classification, it is crucial to evaluate the data, and the precision of the field spectra was therefore assessed explicitly rather than by the conventional method alone. The effectiveness of the precise field spectra was evaluated using spectral discrimination measures, which yielded higher discrimination values than for spectra developed conventionally. Overall classification accuracy was 51.89% for the image classified with conventionally selected field spectra and 75.47% for the image classified with field spectra selected precisely on the basis of spectral similarity. KHAT values were 0.37 and 0.62, and Z values were 2.77 and 9.59, for the images classified using conventional and precise field spectra, respectively. The considerably higher classification accuracy, KHAT, and Z values indicate the promise of a new approach to field spectra selection based on spectral similarity measures.
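The KHAT statistic reported above is Cohen's kappa computed from the classification confusion matrix, i.e. observed agreement corrected for chance agreement. A minimal sketch; the matrix in the test is hypothetical, not the study's data:

```python
import numpy as np

def khat(confusion):
    """Cohen's kappa (KHAT) from a square confusion matrix."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n  # observed agreement (overall accuracy)
    # chance agreement from the row and column marginals
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    return (po - pe) / (1.0 - pe)
```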
Alsharbaty, Mohammed Hussein M; Alikhasi, Marzieh; Zarrati, Simindokht; Shamshiri, Ahmed Reza
2018-02-09
To evaluate the accuracy of a digital implant impression technique using a TRIOS 3Shape intraoral scanner (IOS) compared to conventional implant impression techniques (pick-up and transfer) in clinical situations. Thirty-six patients who had two implants (Implantium, internal connection) ranging in diameter between 3.8 and 4.8 mm in posterior regions participated in this study after signing a consent form. Thirty-six reference models (RM) were fabricated by attaching two impression copings intraorally, splinted with autopolymerizing acrylic resin, verified by sectioning through the middle of the index, and rejoined again with freshly mixed autopolymerizing acrylic resin pattern (Pattern Resin) with the brush bead method. After that, the splinted assemblies were attached to implant analogs (DANSE) and impressed with type III dental stone (Gypsum Microstone) in standard plastic die lock trays. Thirty-six working casts were fabricated for each conventional impression technique (i.e., pick-up and transfer). Thirty-six digital impressions were made with a TRIOS 3Shape IOS. Eight of the digitally scanned files were damaged; 28 digital scan files were retrieved to STL format. A coordinate-measuring machine (CMM) was used to record linear displacement measurements (x, y, and z-coordinates), interimplant distances, and angular displacements for the RMs and conventionally fabricated working casts. CATIA 3D evaluation software was used to assess the digital STL files for the same variables as the CMM measurements. CMM measurements made on the RMs and conventionally fabricated working casts were compared with 3D software measurements made on the digitally scanned files. Data were statistically analyzed using the generalized estimating equation (GEE) with an exchangeable correlation matrix and linear method, followed by the Bonferroni method for pairwise comparisons (α = 0.05). 
The results showed significant differences between the pick-up and digital groups in all of the measured variables (p < 0.001). Between the transfer and digital groups, the differences were statistically significant for angular displacement (p < 0.001), distance measurements (p = 0.01), and linear displacement (p = 0.03); however, between the pick-up and transfer groups there was no statistically significant difference in any of the measured variables (interimplant distance deviation, linear displacement, and angular displacement deviations). According to the results of this study, the digital implant impression technique was the least accurate. Based on the study outcomes, the distance and angulation errors associated with intraoral digital implant impressions were too large to fabricate well-fitting restorations for partially edentulous patients. The pick-up implant impression technique was the most accurate, and the transfer technique showed comparable accuracy. © 2018 by the American College of Prosthodontists.
Zahid, Sarwar; Peeler, Crandall; Khan, Naheed; Davis, Joy; Mahmood, Mahdi; Heckenlively, John; Jayasundera, Thiran
2015-01-01
Purpose To develop a reliable and efficient digital method to quantify planimetric Goldmann visual field (GVF) data to monitor disease course and treatment responses in retinal degenerative diseases. Methods A novel method to digitally quantify GVF using Adobe Photoshop CS3 was developed for comparison to traditional digital planimetry (Placom 45C digital planimeter; EngineerSupply, Lynchburg, Virginia, USA). GVFs from 20 eyes from 10 patients with Stargardt disease were quantified to assess the difference between the two methods (a total of 230 measurements per method). This quantification approach was also applied to 13 patients with X-linked retinitis pigmentosa (XLRP) with mutations in RPGR. Results Overall, measurements using Adobe Photoshop were more rapidly performed than those using conventional planimetry. Photoshop measurements also exhibited less inter- and intra-observer variability. GVF areas for the I4e isopter in patients with the same mutation in RPGR who were nearby in age had similar qualitative and quantitative areas. Conclusions Quantification of GVF using Adobe Photoshop is quicker, more reliable, and less-user dependent than conventional digital planimetry. It will be a useful tool for both retrospective and prospective studies of disease course as well as for monitoring treatment response in clinical trials for retinal degenerative diseases. PMID:24664690
Fujiwara, Yasuhiro; Maruyama, Hirotoshi; Toyomaru, Kanako; Nishizaka, Yuri; Fukamatsu, Masahiro
2018-06-01
Magnetic resonance imaging (MRI) is widely used to detect carotid atherosclerotic plaques. Although it is important to evaluate vulnerable carotid plaques containing lipids and intra-plaque hemorrhages (IPHs) using T1-weighted images, the image contrast changes depending on the imaging settings. Moreover, to distinguish between a thrombus and a hemorrhage, it is useful to evaluate the iron content of the plaque using both T1-weighted and T2*-weighted images. Therefore, a quantitative evaluation of carotid atherosclerotic plaques using T1 and T2* values may be necessary for the accurate evaluation of plaque components. The purpose of this study was to determine whether the multi-echo phase-sensitive inversion recovery (mPSIR) sequence can improve T1 contrast while simultaneously providing accurate T1 and T2* values of an IPH. T1 and T2* values measured using mPSIR were compared to values from conventional methods in phantom and in vivo studies. In the phantom study, the T1 and T2* values estimated using mPSIR were linearly correlated with those of conventional methods. In the in vivo study, mPSIR demonstrated higher T1 contrast between the IPH phantom and sternocleidomastoid muscle than the conventional method. Moreover, the T1 and T2* values of the blood vessel wall and sternocleidomastoid muscle estimated using mPSIR were correlated with values measured by conventional methods and with previously reported values. The mPSIR sequence improved T1 contrast while simultaneously providing accurate T1 and T2* values of the neck region. Although further study is required to evaluate the clinical utility, mPSIR may improve carotid atherosclerotic plaque detection and provide detailed information about plaque components.
Evaluation of thermal cooling mechanisms for laser application to teeth.
Miserendino, L J; Abt, E; Wigdor, H; Miserendino, C A
1993-01-01
Experimental cooling methods for the prevention of thermal damage to the dental pulp during laser application to teeth were compared with conventional treatment in vitro. Pulp temperature measurements were made via electrical thermistors implanted within the pulp chambers of extracted human third molar teeth. Experimental treatments consisted of lasing without cooling, lasing with cooling, laser pulsing, and high-speed dental rotary drilling. Comparisons of the pulp temperature elevation measurements for each group demonstrated that cooling by an air and water spray during lasing significantly reduced heat transfer to the dental pulp. Laser exposures followed by an air and water spray resulted in pulp temperature changes comparable to conventional treatment by drilling. Cooling by an air and water spray with evacuation appears to be an effective method for the prevention of thermal damage to vital teeth following laser exposure.
NASA Astrophysics Data System (ADS)
Arenas, Gustavo; Noriega, Sergio; Vallo, Claudia; Duchowicz, Ricardo
2007-03-01
A fiber optic sensing method based on a Fizeau-type interferometric scheme was employed for monitoring linear polymerization shrinkage in dental restoratives. This technique offers several advantages over the conventional methods of measuring polymerization contraction. This simple, compact, non-invasive and self-calibrating system competes with both conventional and other high-resolution bulk interferometric techniques. In this work, an analysis of the quality of interference signal and fringes visibility was performed in order to characterize their resolution and application range. The measurements of percent linear contraction as a function of the sample thickness were carried out in this study on two dental composites: Filtek P60 (3M ESPE) Posterior Restorer and Filtek Z250 (3M ESPE) Universal Restorer. The results were discussed with respect to others obtained employing alternative techniques.
Catalytic activity of Ru-Sn/Al2O3 in reduction reaction of pollutant 4-Nitrophenol
NASA Astrophysics Data System (ADS)
Rini, A. S.; Radiman, S.; Yarmo, M. A.
2018-03-01
Ru-Sn/Al2O3 bimetallic nanocatalysts were synthesized using conventional and microwave-assisted impregnation methods. The structure and morphology of the samples were characterized using XRD, XPS, and TEM. The XRD and XPS measurements confirmed the presence of Ru and Sn in the samples. According to the TEM results, the morphology of the catalyst depends strongly on the preparation route and on the stabilizing agent (PVP, polyvinylpyrrolidone). The sample prepared with PVP showed a better distribution of nanoparticles over the support, whereas the sample prepared by the conventional method showed agglomeration of nanoparticles on the support. The catalytic activity of both samples was examined in the reduction of the pollutant 4-nitrophenol. The catalytic tests showed that the reaction rate of 4-nitrophenol reduction with the microwave-assisted sample was 3.5 times faster than with the conventional impregnation sample.
Targeted Single-Shot Methods for Diffusion-Weighted Imaging in the Kidneys
Jin, Ning; Deng, Jie; Zhang, Longjiang; Zhang, Zhuoli; Lu, Guangming; Omary, Reed A.; Larson, Andrew C.
2011-01-01
Purpose To investigate the feasibility of combining the inner-volume-imaging (IVI) technique with single-shot diffusion-weighted (DW) spin-echo echo-planar imaging (SE-EPI) and DW-SPLICE (split acquisition of fast spin-echo) sequences for renal DW imaging. Materials and Methods Renal DW imaging was performed in 10 healthy volunteers using single-shot DW-SE-EPI, DW-SPLICE, targeted-DW-SE-EPI and targeted-DW-SPLICE. We compared the quantitative diffusion measurement accuracy and image quality of these targeted-DW-SE-EPI and targeted DW-SPLICE methods with conventional full FOV DW-SE-EPI and DW-SPLICE measurements in phantoms and normal volunteers. Results Compared with full FOV DW-SE-EPI and DW-SPLICE methods, targeted-DW-SE-EPI and targeted-DW-SPLICE approaches produced images of superior overall quality with fewer artifacts, less distortion and reduced spatial blurring in both phantom and volunteer studies. The ADC values measured with each of the four methods were similar and in agreement with previously published data. There were no statistically significant differences between the ADC values and intra-voxel incoherent motion (IVIM) measurements in the kidney cortex and medulla using single-shot DW-SE-EPI, targeted-DW-EPI and targeted-DW-SPLICE (p > 0.05). Conclusion Compared with full-FOV DW imaging methods, targeted-DW-SE-EPI and targeted-DW-SPLICE techniques reduced image distortion and artifacts observed in the single-shot DW-SE-EPI images, reduced blurring in DW-SPLICE images and produced comparable quantitative DW and IVIM measurements to those produced with conventional full-FOV approaches. PMID:21591023
Schneiderman, Eva; Colón, Ellen; White, Donald J; St John, Samuel
2015-01-01
The purpose of this study was to compare the abrasivity of commercial dentifrices by two techniques: the conventional gold standard radiotracer-based Radioactive Dentin Abrasivity (RDA) method; and a newly validated technique based on V8 brushing that included a profilometry-based evaluation of dentin wear. This profilometry-based method is referred to as RDA-Profilometry Equivalent, or RDA-PE. A total of 36 dentifrices were sourced from four global dentifrice markets (Asia Pacific [including China], Europe, Latin America, and North America) and tested blindly using both the standard radiotracer (RDA) method and the new profilometry method (RDA-PE), taking care to follow specific details related to specimen preparation and treatment. Commercial dentifrices tested exhibited a wide range of abrasivity, with virtually all falling well under the industry accepted upper limit of 250; that is, 2.5 times the level of abrasion measured using an ISO 11609 abrasivity reference calcium pyrophosphate as the reference control. RDA and RDA-PE comparisons were linear across the entire range of abrasivity (r2 = 0.7102) and both measures exhibited similar reproducibility with replicate assessments. RDA-PE assessments were not just linearly correlated, but were also proportional to conventional RDA measures. The linearity and proportionality of the results of the current study support that both methods (RDA or RDA-PE) provide similar results and justify a rationale for making the upper abrasivity limit of 250 apply to both RDA and RDA-PE.
Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young
2014-03-01
This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models.
Use of radars to monitor stream discharge by noncontact methods
Costa, J.E.; Cheng, R.T.; Haeni, F.P.; Melcher, N.; Spicer, K.R.; Hayes, E.; Plant, W.; Hayes, K.; Teague, C.; Barrick, D.
2006-01-01
Conventional measurements of river flows are costly, time‐consuming, and frequently dangerous. This report evaluates the use of a continuous wave microwave radar, a monostatic UHF Doppler radar, a pulsed Doppler microwave radar, and a ground‐penetrating radar to measure river flows continuously over long periods and without touching the water with any instruments. The experiments duplicate the flow records from conventional stream gauging stations on the San Joaquin River in California and the Cowlitz River in Washington. The purpose of the experiments was to directly measure the parameters necessary to compute flow: surface velocity (converted to mean velocity) and cross‐sectional area, thereby avoiding the uncertainty, complexity, and cost of maintaining rating curves. River channel cross sections were measured by ground‐penetrating radar suspended above the river. River surface water velocity was obtained by Bragg scattering of microwave and UHF Doppler radars, and the surface velocity data were converted to mean velocity on the basis of detailed velocity profiles measured by current meters and hydroacoustic instruments. Experiments using these radars to acquire a continuous record of flow were conducted for 4 weeks on the San Joaquin River and for 16 weeks on the Cowlitz River. At the San Joaquin River the radar noncontact measurements produced discharges more than 20% higher than the other independent measurements in the early part of the experiment. After the first 3 days, the noncontact radar discharge measurements were within 5% of the rating values. On the Cowlitz River at Castle Rock, correlation coefficients between the USGS stream gauging station rating curve discharge and discharge computed from three different Doppler radar systems and GPR data over the 16 week experiment were 0.883, 0.969, and 0.992. Noncontact radar results were within a few percent of discharge values obtained by gauging station, current meter, and hydroacoustic methods. 
Time series of surface velocity obtained by the different radars in the Cowlitz River experiment also show small-amplitude pulsations, not found in the stage records, that reflect tidal energy at the gauging station. Noncontact discharge measurements made during a flood on 30 January 2004 agreed with the rated discharge to within 5%. Measurements at both field sites confirm that lognormal velocity profiles exist for a wide range of flows in these rivers, and mean velocity is approximately 0.85 times the measured surface velocity. Noncontact methods of flow measurement appear to (1) be as accurate as conventional methods, (2) obtain data when standard contact methods are dangerous or impossible, and (3) provide insight into flow dynamics not available from detailed stage records alone.
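The discharge computation described above reduces to scaling radar-derived surface velocities by the 0.85 surface-to-mean ratio and multiplying by the GPR-derived subsection areas. A minimal sketch (the function name and sample values are ours, for illustration only, not data from the study):

```python
import numpy as np

SURFACE_TO_MEAN = 0.85  # mean/surface velocity ratio for lognormal profiles

def radar_discharge(surface_velocities, subsection_areas, ratio=SURFACE_TO_MEAN):
    """Discharge (m^3/s) as the sum of mean velocity x area over
    channel subsections measured by radar and GPR."""
    v_mean = ratio * np.asarray(surface_velocities, dtype=float)
    return float(np.sum(v_mean * np.asarray(subsection_areas, dtype=float)))
```

Two hypothetical subsections with 2.0 m/s surface velocity and 10 m² area each would give 0.85 × 2.0 × 10 × 2 = 34 m³/s.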
NASA Astrophysics Data System (ADS)
Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho
2015-01-01
Independent Component Analysis (ICA), one of the blind source separation methods, can extract unknown source signals from the received signals alone. This is accomplished by finding statistical independence among signal mixtures, and it has been successfully applied in myriad fields such as medical science and image processing. Nevertheless, inherent problems have been reported with this technique: instability and invalid ordering of the separated signals, particularly when a conventional ICA technique is used for vibratory source signal identification in complex structures. In this study, a simple iterative algorithm extending conventional ICA is proposed to mitigate these problems. The proposed method extracts more stable source signals in a valid order through an iterative process that reorders the extracted mixing matrix, guided by the magnitudes of the correlation coefficients between the intermediately separated signals and signals measured on or near the sources, and reconstructs the finally converged source signals. To review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses were carried out on a virtual response model and on a 30 m class submarine model. Moreover, to investigate the applicability of the proposed method to a real problem involving a complex structure, an experiment was carried out on a scaled submarine mockup. The results show that the proposed method resolves the inherent problems of the conventional ICA technique.
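The reordering step can be sketched as a greedy matching of separated components to reference signals by absolute correlation coefficient. This is an illustrative reconstruction under our own assumptions (function and variable names are ours), not the authors' code:

```python
import numpy as np

def reorder_by_reference(separated, references):
    """Greedily assign each reference signal (measured on or near a source)
    the separated component with the largest |correlation coefficient|,
    returning the components in that (valid) order."""
    separated = np.asarray(separated, dtype=float)
    references = np.asarray(references, dtype=float)
    n_ref = references.shape[0]
    # |r| between each reference (rows) and each separated component (cols)
    corr = np.abs(np.corrcoef(np.vstack([references, separated]))[:n_ref, n_ref:])
    order, available = [], list(range(separated.shape[0]))
    for i in range(n_ref):
        j = max(available, key=lambda c: corr[i, c])
        order.append(j)
        available.remove(j)
    return separated[order]
```

In the full algorithm this reordering would be applied to the extracted mixing matrix inside each iteration until the recovered signals converge.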
van Bochove, J A; van Amerongen, W E
2006-03-01
The aim was to investigate possible differences in discomfort during treatment with the atraumatic restorative treatment (ART) or the conventional restorative method, each with and without local analgesia (LA). The study group consisted of 6- and 7-year-old children with no previous dental experience (mean age 6.98 years, SD 0.52), randomly divided into four treatment groups: the conventional method with and without LA, and ART with and without LA. One or two proximal lesions in primary molars were treated. Heart rate and behaviour (Venham scale) were measured. Statistical analysis was performed in SPSS version 10.0. In a first session 300 children were treated, and 109 children were treated a second time in the same way as at the first visit. During the first session, ART without LA caused the least discomfort, while the conventional method without LA caused the most. During the second treatment, the least discomfort was again observed with ART without LA and the most with the conventional method with LA. There was a consistent preference for hand instruments, although the bur was increasingly accepted; the experience with LA showed the reverse trend.
Sloat, Amy L; Roper, Michael G; Lin, Xiuli; Ferrance, Jerome P; Landers, James P; Colyer, Christa L
2008-08-01
In response to a growing interest in the use of smaller, faster microchip (mu-chip) methods for the separation of proteins, advancements are proposed that employ the asymmetric squarylium dye Red-1c as a noncovalent label in mu-chip CE separations. This work compares on-column and precolumn labeling methods for the proteins BSA, beta-lactoglobulin B (beta-LB), and alpha-lactalbumin (alpha-LA). Nonequilibrium CE of equilibrium mixtures (NECEEM) represents an efficient method to determine equilibrium parameters associated with the formation of intermolecular complexes, such as those formed between the dye and proteins in this work, and it allows for the use of weak affinity probes in protein quantitation. In particular, nonequilibrium methods employing both mu-chip and conventional CE systems were implemented to determine association constants governing the formation of noncovalent complexes of the red luminescent squarylium dye Red-1c with BSA and beta-LB. By our mu-chip NECEEM method, the association constants K(assoc) for beta-LB and BSA complexes with Red-1c were found to be 3.53 x 10(3) and 1.65 x 10(5) M(-1), respectively, whereas association constants found by our conventional CE-LIF NECEEM method for these same protein-dye systems were some ten times higher. Despite discrepancies between the two methods, both confirmed the preferential interaction of Red-1c with BSA. In addition, the effect of protein concentration on measured association constant was assessed by conventional CE methods. Although a small decrease in K(assoc) was observed with the increase in protein concentration, our studies indicate that absolute protein concentration may affect the equilibrium determination less than the relative concentration of protein-to-dye.
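To see what the reported association constants imply, a 1:1 binding model gives the equilibrium bound fraction as f = K[D]/(1 + K[D]). A small sketch comparing the two microchip NECEEM constants (the 10 µM dye concentration is an arbitrary illustration, not a value from the study):

```python
def fraction_bound(k_assoc, dye_conc):
    """Equilibrium bound fraction for a 1:1 protein-dye complex,
    assuming free dye ~ total dye (dye in excess)."""
    x = k_assoc * dye_conc
    return x / (1.0 + x)

# Reported microchip NECEEM constants (M^-1) at a hypothetical 10 uM dye:
f_bsa = fraction_bound(1.65e5, 1e-5)   # BSA: mostly bound
f_blb = fraction_bound(3.53e3, 1e-5)   # beta-LB: mostly free
```

The nearly two orders of magnitude between the constants translate directly into the preferential labeling of BSA noted above.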
Guoxin, Hu; Ying, Yang; Yuemei, Jiang; Wenjing, Xia
2017-04-01
This study evaluated antagonist wear and the friction and wear properties of a dental zirconia ceramic subjected to microwave and conventional sintering methods. Ten specimens were fabricated from Lava brand zirconia and randomly assigned to microwave and conventional sintering groups. A surface roughness profile tester was used to measure the roughness of the specimens. A wear test was performed with steatite ceramic as the antagonist; friction coefficient curves were recorded and wear volumes were calculated. Optical microscopy was used to observe the surface morphology of the zirconia and steatite ceramics, and field emission scanning electron microscopy was used to observe the microstructure of the zirconia. The wear volumes of microwave- and conventionally sintered zirconia were (6.940±1.382)×10⁻² and (7.952±1.815)×10⁻² mm³, respectively, and the corresponding wear volumes of the antagonist were (14.189±4.745)×10⁻² and (15.813±3.481)×10⁻² mm³. No statistically significant difference was observed between the two sintering methods in the wear resistance of the zirconia or in the wear volume of the steatite ceramic. Optical microscopy showed apparent plough marks on the zirconia surfaces; the worn surface of the antagonist steatite ceramic showed crazing accompanied by ploughing. Scanning electron microscopy showed that the zirconia was sintered compactly by both methods, although the grains of the microwave-sintered zirconia were smaller and more uniform. Both sintering methods successfully produced dental zirconia ceramics with similar friction and wear properties.
Radiosurgical fistulotomy; an alternative to conventional procedure in fistula in ano.
Gupta, Pravin J
2003-01-01
Most surgeons continue to prefer the classic lay-open technique [fistulotomy] as the gold standard of treatment for anal fistula. In this randomized study, a comparison is made between conventional fistulotomy and fistulotomy performed with a radio frequency device. One hundred patients with low anal fistulas scheduled for fistulotomy were randomized prospectively to either the conventional or the radio frequency technique. Parameters measured included time taken for the procedure, amount of blood loss, postoperative pain, return to work, and recurrence rate. Patient demographics were comparable between the two groups. Radio frequency fistulotomy was quicker than the conventional procedure [22 versus 37 minutes, p = 0.001], blood loss was significantly less [47 ml versus 134 ml, p = 0.002], and hospital stay was shorter with the radio frequency method [37 hours versus 56 hours, p = 0.001]. Postoperative pain in the first 24 hours was greater in the conventional group [2 to 5 versus 0 to 3 on a visual analogue scale]. Patients in the radio frequency group resumed their duties earlier, with a shorter wound healing period [47 versus 64 days, p = 0.01]. Recurrence or failure rates were comparable between the radio frequency and conventional groups [2% versus 6%]. Fistulotomy using a radio frequency technique has significant advantages over the conventional procedure with regard to operation time, blood loss, return to normal activity, and wound healing time.
Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data
Liu, Yang; Dai, Qin; Liu, JianBo; Liu, ShiBin; Yang, Jin
2014-01-01
Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies perform poorly on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). The method exploits the different features available in remote sensing images while addressing the practical need to extract burn scars rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Level Set Chan-Vese (C-V) model with a new initial curve derived from a binary image obtained by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the OTSU algorithm and the Fuzzy C-means (FCM) algorithm show that the proposed approach extracts the outline of a fire burn scar effectively and accurately, with higher extraction accuracy and lower algorithmic complexity than the conventional C-V model. PMID:24503563
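The spectral indices feeding the difference image are simple band ratios. A minimal sketch of NDVI, NBR and a dNBR-style difference (the band values are synthetic reflectances we made up; a burned pixel shows the characteristic NBR drop):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-12)

def nbr(nir, swir):
    """Normalized Burn Ratio."""
    return (nir - swir) / (nir + swir + 1e-12)

# Synthetic pre-/post-fire reflectances for two pixels: [unchanged, burned].
pre  = {"nir": np.array([0.50, 0.50]), "swir": np.array([0.10, 0.10])}
post = {"nir": np.array([0.50, 0.10]), "swir": np.array([0.10, 0.50])}

# dNBR-style difference image: large positive values flag burn scars.
dnbr = nbr(pre["nir"], pre["swir"]) - nbr(post["nir"], post["swir"])
```

In the paper's pipeline such a difference image (combined with CVA and NDVI) seeds the level-set initial curve; the thresholding itself is done by K-means rather than a fixed cutoff.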
Holographic particle size extraction by using Wigner-Ville distribution
NASA Astrophysics Data System (ADS)
Chuamchaitrakool, Porntip; Widjaja, Joewono; Yoshimura, Hiroyuki
2014-06-01
A new method for measuring object size from in-line holograms by using the Wigner-Ville distribution (WVD) is proposed. The proposed method has advantages over conventional numerical reconstruction in that it requires no iterative processing and can extract the object size and position with only a single computation of the WVD. Experimental verification of the proposed method is presented.
Chlorine measurement in the jet singlet oxygen generator considering the effects of the droplets.
Goodarzi, Mohamad S; Saghafifar, Hossein
2016-09-01
A new method is presented to measure chlorine concentration more accurately than the conventional method in the exhaust gases of a jet-type singlet oxygen generator. One problem in this measurement is the presence of micrometer-sized droplets. In this article, an empirical method is reported to eliminate the effects of the droplets. Two wavelengths from a fiber-coupled LED are adopted and the measurement is made at both selected wavelengths. By eliminating the droplet term in the equations, the two-wavelength method measures chlorine more accurately than the one-wavelength method. The method is validated without basic hydrogen peroxide injection in the reactor: a pressure meter reading in the diagnostic cell is compared with the optically calculated pressure obtained by the one-wavelength and two-wavelength methods. The chlorine measured by the two-wavelength method agrees closely with the pressure meter, while the one-wavelength method has a significant error due to the droplets.
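The droplet cancellation works because, under Beer-Lambert absorption, a wavelength-flat droplet extinction term drops out when the two log-ratio absorbances are subtracted. A sketch under that assumption (the cross sections, path length and intensities below are made-up numbers, not values from the paper):

```python
import math

def cl2_density_two_wavelength(i0_1, i1, i0_2, i2, sigma1, sigma2, length):
    """Cl2 number density (cm^-3) from transmitted intensities at two
    wavelengths; the common droplet extinction cancels in a1 - a2."""
    a1 = math.log(i0_1 / i1)   # total optical depth at wavelength 1
    a2 = math.log(i0_2 / i2)   # total optical depth at wavelength 2
    return (a1 - a2) / ((sigma1 - sigma2) * length)

# Forward-model a measurement: n = 1e16 cm^-3, droplet optical depth 0.3.
n_true, tau_drop, L = 1e16, 0.3, 10.0
s1, s2 = 2.5e-19, 1.0e-19            # absorption cross sections, cm^2
i1 = math.exp(-(s1 * n_true * L + tau_drop))
i2 = math.exp(-(s2 * n_true * L + tau_drop))
n_est = cl2_density_two_wavelength(1.0, i1, 1.0, i2, s1, s2, L)
```

A one-wavelength inversion of the same synthetic data would attribute the droplet optical depth to chlorine and overestimate the density, which is the error the paper reports.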
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasgupta, Aritra; Burrows, Susannah M.; Han, Kyungsik
Scientists working in a particular domain often adhere to conventional data analysis and presentation methods, and this leads to familiarity with these methods over time. But does high familiarity always lead to better analytical judgment? This question is especially relevant when visualizations are used in scientific tasks, as there can be discrepancies between visualization best practices and domain conventions. However, there is little empirical evidence of the relationships between scientists' subjective impressions about familiar and unfamiliar visualizations and objective measures of their effect on scientific judgment. To address this gap and to study these factors, we focus on the climate science domain, specifically on visualizations used for comparison of model performance. We present a comprehensive user study with 47 climate scientists where we explored the following factors: i) relationships between scientists' familiarity, their perceived levels of comfort, confidence, accuracy, and objective measures of accuracy, and ii) relationships among domain experience, visualization familiarity, and post-study preference.
Talio, María Carolina; Acosta, María Gimena; Acosta, Mariano; Olsina, Roberto; Fernández, Liliana P
2015-05-15
A new method for zinc pre-concentration/separation and determination by molecular fluorescence is proposed. The metal was complexed with o-phenanthroline and eosin at pH 7.5 in Tris buffer; a piece of filter paper was used as a solid support, and the solid-phase fluorescence emission was measured using a conventional quartz cuvette. Under optimal conditions, the limits of detection and quantification were 0.36 × 10(-3) and 1.29 × 10(-3) μg L(-1), respectively, with a linear range from 1.29 × 10(-3) to 4.50 μg L(-1). The method showed good sensitivity and selectivity, and it was applied to the determination of zinc in foods and tap water. The absence of a filtration step reduced the consumption of water and electricity. Additionally, the use of common filter papers makes it a simpler and more rapid alternative to conventional methods, with sensitivity and accuracy similar to atomic spectroscopies using a typical laboratory instrument. Copyright © 2014 Elsevier Ltd. All rights reserved.
A flood map based DOI decoding method for block detector: a GATE simulation study.
Shi, Han; Du, Dong; Su, Zhihong; Peng, Qiyu
2014-01-01
Positron Emission Tomography (PET) systems using detectors with Depth of Interaction (DOI) capabilities can achieve higher spatial resolution and better image quality than those without DOI. To date, however, most DOI methods are not cost-efficient for a whole-body PET system. In this paper, we present a DOI decoding method based on the flood map for a low-cost conventional block detector with four-PMT readout. With this method, the DOI information can be extracted directly from the DOI-related deformation of the crystal spots in the flood map. GATE simulations were carried out to validate the method, confirming a DOI sorting accuracy of 85.27%. We therefore conclude that this method has the potential to be applied in conventional detectors to achieve a reasonable DOI measurement without dramatically increasing the complexity and cost of an entire PET system.
Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-10-01
Conventional compressive sensing is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of image reconstruction suffers. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of an image is proposed based on the two-dimensional discrete cosine transform (2D DCT). Given an energy threshold, the DCT coefficients are energy-normalized and sorted in descending order, and the sparsity of the image is obtained as the proportion of dominant coefficients. Simulation results show that the method estimates the sparsity of an image effectively and provides a practical basis for selecting the number of compressive observations. Because the number of observations is chosen from the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
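The sparsity estimate described above — energy-normalize the 2D DCT coefficients, sort them in descending order, and count how many are needed to reach the energy threshold — can be sketched as follows (the threshold value is illustrative; the DCT matrix is built by hand so the sketch needs only NumPy):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def estimate_sparsity(image, energy_threshold=0.99):
    """Fraction of 2D DCT coefficients needed to retain the energy share
    given by the threshold: the 'proportion of dominant coefficients'."""
    rows, cols = image.shape
    coeffs = dct_matrix(rows) @ image @ dct_matrix(cols).T
    energy = np.sort((coeffs ** 2).ravel())[::-1]  # sort descending
    energy = energy / energy.sum()                 # energy normalization
    k = np.searchsorted(np.cumsum(energy), energy_threshold) + 1
    return k / energy.size
```

A smooth image concentrates its energy in a few low-frequency coefficients and yields a small sparsity value, which in turn permits fewer compressive measurements.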
Adaptive compressed sensing of multi-view videos based on the sparsity estimation
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-11-01
Conventional compressive sensing for videos is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of video reconstruction suffers. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of multi-view videos is proposed based on the two-dimensional discrete wavelet transform (2D DWT). Given an energy threshold, the DWT coefficients are energy-normalized and sorted in descending order, and the sparsity of the multi-view video is obtained as the proportion of dominant coefficients. Simulation results show that the method estimates the sparsity of a video frame effectively and provides a practical basis for selecting the number of compressive observations. Because the number of observations is chosen from the sparsity estimated with the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
Brown, Marty Skemp; Maurer, Martha A
2014-01-01
Abstract Objective To determine whether national drug control laws ensure that opioid drugs are available for medical and scientific purposes, as intended by the 1972 Protocol amendment to the 1961 Single Convention on Narcotic Drugs. Methods The authors examined whether the text of a convenience sample of drug laws from 15 countries: (i) acknowledged that opioid drugs are indispensable for the relief of pain and suffering; (ii) recognized that government was responsible for ensuring the adequate provision of such drugs for medical and scientific purposes; (iii) designated an administrative body for implementing international drug control conventions; and (iv) acknowledged a government’s intention to implement international conventions, including the Single Convention. Findings Most national laws were found not to contain measures that ensured adequate provision of opioid drugs for medical and scientific purposes. Moreover, the model legislation provided by the United Nations Office on Drugs and Crime did not establish an obligation on national governments to ensure the availability of these drugs for medical use. Conclusion To achieve consistency with the Single Convention, as well as with associated resolutions and recommendations of international bodies, national drug control laws and model policies should be updated to include measures that ensure drug availability to balance the restrictions imposed by the existing drug control measures needed to prevent the diversion and nonmedical use of such drugs. PMID:24623904
Zhang, Shangjian; Wang, Heng; Zou, Xinhai; Zhang, Yali; Lu, Rongguo; Liu, Yong
2015-06-15
An extinction-ratio-independent electrical method is proposed for measuring chirp parameters of Mach-Zehnder electric-optic intensity modulators based on frequency-shifted optical heterodyne. The method utilizes the electrical spectrum analysis of the heterodyne products between the intensity modulated optical signal and the frequency-shifted optical carrier, and achieves the intrinsic chirp parameters measurement at microwave region with high-frequency resolution and wide-frequency range for the Mach-Zehnder modulator with a finite extinction ratio. Moreover, the proposed method avoids calibrating the responsivity fluctuation of the photodiode in spite of the involved photodetection. Chirp parameters as a function of modulation frequency are experimentally measured and compared to those with the conventional optical spectrum analysis method. Our method enables an extinction-ratio-independent and calibration-free electrical measurement of Mach-Zehnder intensity modulators by using the high-resolution frequency-shifted heterodyne technique.
Lin, Chun-I; Lee, Yung-Chun
2014-08-01
Line-focused PVDF transducers and the defocusing measurement method are applied in this work to determine the dispersion curve of Rayleigh-like surface waves propagating along the circumferential direction of a solid cylinder. The conventional waveform processing method has been modified to cope with the non-linear relationship, induced by the cylindrically curved surface, between the phase angle of wave interference and the defocusing distance. A cross-correlation method is proposed to accurately extract the cylindrical Rayleigh wave velocity from the measured data. Experiments have been carried out on one stainless steel cylinder and one glass cylinder. The experimentally obtained dispersion curves are in very good agreement with their theoretical counterparts, and the variation of the cylindrical Rayleigh wave velocity with cylindrical curvature is quantitatively verified using this new method. Other potential applications of this measurement method to cylindrical samples are also addressed. Copyright © 2014 Elsevier B.V. All rights reserved.
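Extracting a velocity from the cross-correlation peak of two waveforms can be sketched as below (the synthetic Gaussian pulses, sampling rate and propagation distance are illustrative inventions, not the paper's data or exact procedure):

```python
import numpy as np

def delay_by_xcorr(sig_a, sig_b, fs):
    """Delay (s) of sig_b relative to sig_a, from the cross-correlation peak."""
    c = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(c)) - (len(sig_a) - 1)
    return lag / fs

fs = 1.0e6                      # sampling rate, Hz (hypothetical)
n = np.arange(1000)
pulse = lambda center: np.exp(-((n - center) ** 2) / 50.0)

# Two pulses 50 samples apart over a 0.15 m path -> 3000 m/s
v = 0.15 / delay_by_xcorr(pulse(100), pulse(150), fs)
```

In the actual defocusing method the delay is measured as a function of defocusing distance, and the dispersion curve follows from repeating this at each frequency.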
Optical methods for non-contact measurements of membranes
NASA Astrophysics Data System (ADS)
Roose, S.; Stockman, Y.; Rochus, P.; Kuhn, T.; Lang, M.; Baier, H.; Langlois, S.; Casarosa, G.
2009-11-01
Structures for space applications very often suffer stringent mass constraints. Lightweight structures are developed for this purpose, through the use of deployable and/or inflatable beams, and thin-film membranes. Their inherent properties (low mass and small thickness) preclude the use of conventional measurement methods (accelerometers and displacement transducers for example) during on-ground testing. In this context, innovative non-contact measurement methods need to be investigated for these stretched membranes. The object of the present project is to review existing measurement systems capable of measuring characteristics of membrane space-structures such as: dot-projection videogrammetry (static measurements), stereo-correlation (dynamic and static measurements), fringe projection (wrinkles) and 3D laser scanning vibrometry (dynamic measurements). Therefore, minimum requirements were given for the study in order to have representative test articles covering a wide range of applications. We present test results obtained with the different methods on our test articles.
NASA Astrophysics Data System (ADS)
Tang, Huijuan; Hao, Xiaojian; Hu, Xiaotao
2018-01-01
Conventional contact temperature measurement suffers from response delay and is limited by the availability of high-temperature-resistant materials. Exploiting the faster response and the theoretically unbounded range of non-contact methods, a measurement system based on the principle of two-line atomic emission spectroscopy thermometry is presented, and the structure and theory of the temperature measuring device are introduced. According to the Atomic Spectra Database (ASD), the aluminum lines Al I 690.6 nm and Al I 708.5 nm are selected as the two lines for the temperature measurement. The intensity ratio of the two emission lines was measured with a spectrometer to obtain the temperature of Al burning in pure oxygen, and the result was compared with the temperature measured by a thermocouple. The temperatures obtained by the two methods correlate well, demonstrating the feasibility of the method.
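The two-line principle inverts the Boltzmann ratio of two optically thin emission lines of the same species, I1/I2 = (g1 A1 λ2)/(g2 A2 λ1) · exp(−(E1 − E2)/kT). A round-trip sketch with made-up line constants (the g, A and upper-level energies below are placeholders, not the actual Al I values):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def two_line_temperature(r12, lam1, lam2, g1, g2, a1, a2, e1, e2):
    """Temperature (K) from the measured intensity ratio r12 = I1/I2 of two
    optically thin lines of one species (upper-level energies e1, e2 in eV)."""
    c = (g1 * a1 * lam2) / (g2 * a2 * lam1)
    return (e1 - e2) / (K_B * math.log(c / r12))

# Forward-model the ratio at T = 3000 K, then invert it:
lam1, lam2 = 690.6e-9, 708.5e-9
g1, g2, a1, a2 = 4.0, 2.0, 1.0e7, 2.0e7   # hypothetical g and A values
e1, e2 = 4.8, 4.2                          # hypothetical upper levels, eV
ratio = (g1 * a1 * lam2) / (g2 * a2 * lam1) * math.exp(-(e1 - e2) / (K_B * 3000.0))
t_est = two_line_temperature(ratio, lam1, lam2, g1, g2, a1, a2, e1, e2)
```

The sensitivity of the method grows with the energy gap E1 − E2, which is why line pairs with well-separated upper levels are preferred.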
Effect of Robot-Assisted Game Training on Upper Extremity Function in Stroke Patients
2017-01-01
Objective To determine the effects of combining robot-assisted game training with conventional upper extremity rehabilitation training (RCT) on motor and daily functions in comparison with conventional upper extremity rehabilitation training (OCT) in stroke patients. Methods Subjects were eligible if they were able to perform the robot-assisted game training and were divided randomly into a RCT and an OCT group. The RCT group performed one daily session of 30 minutes of robot-assisted game training with a rehabilitation robot, plus one daily session of 30 minutes of conventional rehabilitation training, 5 days a week for 2 weeks. The OCT group performed two daily sessions of 30 minutes of conventional rehabilitation training. The effects of training were measured by a Manual Function Test (MFT), Manual Muscle Test (MMT), Korean version of the Modified Barthel Index (K-MBI) and a questionnaire about satisfaction with training. These measurements were taken before and after the 2-week training. Results Both groups contained 25 subjects. After training, both groups showed significant improvements in motor and daily functions measured by MFT, MMT, and K-MBI compared to the baseline. Both groups demonstrated similar training effects, except motor power of wrist flexion. Patients in the RCT group were more satisfied than those in the OCT group. Conclusion There were no significant differences in changes in most of the motor and daily functions between the two types of training. However, patients in the RCT group were more satisfied than those in the OCT group. Therefore, RCT could be a useful upper extremity rehabilitation training method. PMID:28971037
40 CFR 80.46 - Measurement of reformulated gasoline and conventional gasoline fuel parameters.
Code of Federal Regulations, 2014 CFR
2014-07-01
... D7039-13, Standard Test Method for Sulfur in Gasoline, Diesel Fuel, Jet Fuel, Kerosine, Biodiesel, Biodiesel Blends, and Gasoline-Ethanol Blends by Monochromatic Wavelength Dispersive X-ray Fluorescence...
Step-height measurement with a low coherence interferometer using continuous wavelet transform
NASA Astrophysics Data System (ADS)
Jian, Zhang; Suzuki, Takamasa; Choi, Samuel; Sasaki, Osami
2013-12-01
With the development of electronic technology in recent years, electronic components have become increasingly miniaturized, and a more accurate measurement method has become indispensable. For current nanometer-level measurements, the Michelson interferometer with a laser diode is widely used; it can measure an object accurately without touching it. However, it cannot measure step heights larger than a half-wavelength. In this study, we improve the conventional Michelson interferometer by using a superluminescent diode and the continuous wavelet transform, which detects the time that maximizes the amplitude of the interference signal. With this time, the surface position of the object can be measured accurately. The method was used in this experiment to measure a step height of 20 microns.
Dynamic Range Enhancement of High-Speed Electrical Signal Data via Non-Linear Compression
NASA Technical Reports Server (NTRS)
Laun, Matthew C. (Inventor)
2016-01-01
Systems and methods for high-speed compression of dynamic electrical signal waveforms to extend the measuring capabilities of conventional measuring devices such as oscilloscopes and high-speed data acquisition systems are discussed. Transfer function components and algorithmic transfer functions can be used to accurately measure signals that are within the frequency bandwidth but beyond the voltage range and voltage resolution capabilities of the measuring device.
Acoustic analysis of the propfan
NASA Technical Reports Server (NTRS)
Farassat, F.; Succi, G. P.
1979-01-01
A review of propeller noise prediction technology is presented. Two methods for the prediction of the noise from conventional and advanced propellers in forward flight are described. These methods are based on different time domain formulations. Brief descriptions of the computer algorithms based on these formulations are given. The output of the programs (the acoustic pressure signature) was Fourier analyzed to get the acoustic pressure spectrum. The main difference between the two programs is that one can handle propellers with supersonic tip speed while the other is for subsonic tip speed propellers. Comparisons of the calculated and measured acoustic data for a conventional and an advanced propeller show good agreement in general.
Conductive polymer foam surface improves the performance of a capacitive EEG electrode.
Baek, Hyun Jae; Lee, Hong Ji; Lim, Yong Gyu; Park, Kwang Suk
2012-12-01
In this paper, a new conductive polymer foam-surfaced electrode was proposed for use as a capacitive EEG electrode for nonintrusive EEG measurements in out-of-hospital environments. The current capacitive electrode has a rigid surface that produces an undefined contact area due to its stiffness, which renders it unable to conform to head curvature and locally isolates hairs between the electrode surface and scalp skin, making EEG measurement through hair difficult. In order to overcome this issue, a conductive polymer foam was applied to the capacitive electrode surface to provide a cushioning effect. This enabled EEG measurement through hair without any conductive contact with bare scalp skin. Experimental results showed that the new electrode provided lower electrode-skin impedance and higher voltage gains, signal-to-noise ratios, signal-to-error ratios, and correlation coefficients between EEGs measured by capacitive and conventional resistive methods compared to a conventional capacitive electrode. In addition, the new electrode could measure EEG signals, while the conventional capacitive electrode could not. We expect that the new electrode presented here can be easily installed in a hat or helmet to create a nonintrusive wearable EEG apparatus that does not make users look strange for real-world EEG applications.
Al-Omiri, Mahmoud K; Sghaireen, Mohd G; Alzarea, Bader K; Lynch, Edward
2013-12-01
This study aimed to quantify tooth wear in upper anterior teeth using a new CAD-CAM laser scanning machine, a toolmaker microscope and a conventional tooth wear index. Fifty participants (25 males and 25 females, mean age = 25 ± 4 years) were assessed for incisal tooth wear of the upper anterior teeth using the Smith and Knight clinical tooth wear index (TWI) on two occasions, at the study baseline and 1 year later. Stone dies for each tooth were prepared and scanned using the CAD-CAM laser Cercon System. Scanned images were printed and examined under a toolmaker microscope to quantify tooth wear, and then the dies were directly assessed under the microscope to measure tooth wear. The Wilcoxon signed ranks test was used to analyze the data. TWI scores for incisal edges were 0-3 and were similar on both occasions. Score 4 was not detected. Wear values measured by directly assessing the dies under the toolmaker microscope (range = 113-150 μm, mean = 130 ± 20 μm) were significantly greater than those measured from Cercon Digital Machine images (range = 52-80 μm, mean = 68 ± 23 μm), and both showed significant differences between the two occasions. Wear progression in upper anterior teeth was effectively detected by directly measuring the dies or the images of the dies under the toolmaker microscope. Measuring the dies of worn dentition directly under the toolmaker microscope enabled detection of wear progression more accurately than measuring die images obtained with the Cercon Digital Machine. The conventional method was the least sensitive for tooth wear quantification and was unable to identify wear progression in most cases. Copyright © 2013 Elsevier Ltd. All rights reserved.
Zhang, Lida; Sun, Da-Wen; Zhang, Zhihang
2017-03-24
Moisture sorption isotherms are commonly determined by the saturated salt slurry method, which suffers from long measurement times, cumbersome labor, and microbial deterioration of samples. Thus, a novel method, the water activity (a_w) measurement (AWM) method, has been developed to overcome these drawbacks. The fundamentals and applications of this fast method are introduced with respect to its typical operational steps, the variety of equipment set-ups, and the samples to which it has been applied. Its rapidness and reliability are evaluated by comparison with conventional methods. This review also discusses factors impairing measurement precision and accuracy, including inappropriate choice of pre-drying/wetting techniques and moisture non-uniformity in samples due to inadequate equilibration time. This analysis and the corresponding suggestions can facilitate an improved AWM method with better accuracy and time cost.
Impedance measurement using a two-microphone, random-excitation method
NASA Technical Reports Server (NTRS)
Seybert, A. F.; Parrott, T. L.
1978-01-01
The feasibility of using a two-microphone, random-excitation technique for the measurement of acoustic impedance was studied. Equations were developed, including the effect of mean flow, which show that acoustic impedance is related to the pressure ratio and phase difference between two points in a duct carrying plane waves only. The impedances of a honeycomb ceramic specimen and a Helmholtz resonator were measured and compared with impedances obtained using the conventional standing-wave method. Agreement between the two methods was generally good. A sensitivity analysis was performed to pinpoint possible error sources and recommendations were made for future study. The two-microphone approach evaluated in this study appears to have some advantages over other impedance measuring techniques.
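At the heart of the two-microphone technique is a transfer-function relation between the complex pressure ratio H12 measured at two duct positions and the reflection coefficient of the specimen. A minimal sketch (written in the form later standardized as ISO 10534-2, with mean flow neglected; the geometry values in any example are illustrative) might look like:

```python
import cmath

def normal_impedance(h12, k, x1, x2):
    """Normalized specific acoustic impedance Z/(rho*c) of a sample
    terminating a plane-wave duct, from the complex transfer function
    H12 = p(x2)/p(x1) between two microphones.

    x1, x2: microphone distances from the sample surface (x1 > x2).
    k: acoustic wavenumber (rad/m). Mean flow is ignored.
    """
    s = x1 - x2  # microphone spacing
    # Reflection coefficient from the transfer function.
    r = ((h12 - cmath.exp(-1j * k * s)) /
         (cmath.exp(1j * k * s) - h12)) * cmath.exp(2j * k * x1)
    # Normalized impedance from the reflection coefficient.
    return (1 + r) / (1 - r)
```

Because only a pressure ratio and phase difference enter, broadband random excitation can be used instead of discrete tones, which is the advantage the abstract highlights over the standing-wave method.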
Highly Specific and Wide Range NO2 Sensor with Color Readout.
Fàbrega, Cristian; Fernández, Luis; Monereo, Oriol; Pons-Balagué, Alba; Xuriguera, Elena; Casals, Olga; Waag, Andreas; Prades, Joan Daniel
2017-11-22
We present a simple and inexpensive method to implement a Griess-Saltzman-type reaction that combines the advantages of the liquid phase method (high specificity and fast response time) with the benefits of a solid implementation (easy to handle). We demonstrate that the measurements can be carried out using conventional RGB sensors, circumventing the limitations associated with measuring the samples with spectrometers. We also present a method to optimize the measurement protocol and target a specific range of NO2 concentrations. We demonstrate that it is possible to measure the concentration of NO2 from 50 ppb to 300 ppm with high specificity and without modifying the Griess-Saltzman reagent.
Palpation simulator with stable haptic feedback.
Kim, Sang-Youn; Ryu, Jee-Hwan; Lee, WooJeong
2015-01-01
The main difficulty in constructing palpation simulators is to compute and to generate stable and realistic haptic feedback without vibration. When a user haptically interacts with highly non-homogeneous soft tissues through a palpation simulator, a sudden change of stiffness in target tissues causes unstable interaction with the object. We propose a model consisting of a virtual adjustable damper and an energy measuring element. The energy measuring element gauges energy which is stored in a palpation simulator and the virtual adjustable damper dissipates the energy to achieve stable haptic interaction. To investigate the haptic behavior of the proposed method, impulse and continuous inputs are provided to target tissues. If a haptic interface point meets with the hardest portion in the target tissues modeled with a conventional method, we observe unstable motion and feedback force. However, when the target tissues are modeled with the proposed method, a palpation simulator provides stable interaction without vibration. The proposed method overcomes a problem in conventional haptic palpation simulators where unstable force or vibration can be generated if there is a big discrepancy in material property between an element and its neighboring elements in target tissues.
Highly-sensitive troponin I is increased in patients with gynecological cancers.
Danese, Elisa; Montagnana, Martina; Giudici, Silvia; Aloe, Rosalia; Franchi, Massimo; Guidi, Gian Cesare; Lippi, Giuseppe
2013-08-01
To investigate troponin I (TnI) in patients with gynecological cancers. Highly-sensitive (HS) and conventional TnI were measured in 25 patients with untreated ovarian cancer, 25 with endometriosis and 25 with benign masses. Both HS and conventional TnI were increased in cancer patients. Values above the cut-off were found in 44% and 16% of cancer patients using the HS and conventional TnI methods, respectively. Cardiac involvement is frequent in patients with gynecological cancers and should preferably be assessed using HS troponin immunoassays. Copyright © 2013 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Grimaud, Élisabeth; Taconnat, Laurence; Clarys, David
2017-06-01
The aim of this study was to compare two methods of cognitive stimulation of cognitive functions. The first used a conventional approach; the second used leisure activities, in order to assess their benefits for cognitive functions (speed of processing, working memory capacity and executive functions) and psychoaffective measures (memory span and self-esteem). Sixty-seven participants over 60 years old took part in the experiment. They were divided into three groups: one followed a program of conventional cognitive stimulation, one a program of cognitive stimulation using leisure activities, and one served as a control group. The different measures were evaluated before and after the training program. Results show that the cognitive stimulation program using leisure activities is as effective on memory span, updating and memory self-perception as conventional cognitive stimulation, and more effective on self-esteem than the conventional program. There is no difference between the two stimulated groups and the control group on speed of processing. Neither of the two cognitive stimulation programs provides a benefit for shifting or inhibition. These results indicate that it is possible to enhance working memory and to observe far-transfer benefits in self-perception (self-esteem and memory self-perception) when using leisure activities as a tool for cognitive stimulation.
NASA Astrophysics Data System (ADS)
Yang, Linlin; Sun, Hai; Fu, Xudong; Wang, Suli; Jiang, Luhua; Sun, Gongquan
2014-07-01
A novel method for measuring effective diffusion coefficient of porous materials is developed. The oxygen concentration gradient is established by an air-breathing proton exchange membrane fuel cell (PEMFC). The porous sample is set in a sample holder located in the cathode plate of the PEMFC. At a given oxygen flux, the effective diffusion coefficients are related to the difference of oxygen concentration across the samples, which can be correlated with the differences of the output voltage of the PEMFC with and without inserting the sample in the cathode plate. Compared to the conventional electrical conductivity method, this method is more reliable for measuring non-wetting samples.
Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan
2014-01-01
Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823
A noninvasive method of examination of the hemostasis system.
Kuznik, B I; Fine, I W; Kaminsky, A V
2011-09-01
We propose a noninvasive method for in vivo examination of the hemostasis system based on speckle pattern analysis of coherent light scattered from the skin. We compared the results of measuring basic blood coagulation parameters by conventional invasive and noninvasive methods. A strict correlation was found between the results of measurement of soluble fibrin monomer complexes, international normalized ratio (INR), prothrombin index, and protein C content. The noninvasive method of examination of the hemostatic system enables rough evaluation of the intensity of intravascular coagulation and correction of the dose of indirect anticoagulants to maintain desired values of INR or prothrombin index.
Acoustic emission strand burning technique for motor burning rate prediction
NASA Technical Reports Server (NTRS)
Christensen, W. N.
1978-01-01
An acoustic emission (AE) method is being used to measure the burning rate of solid propellant strands. This method has a precision of 0.5% and excellent burning rate correlation with both subscale and large rocket motors. The AE procedure burns the sample under water and measures the burning rate from the acoustic output. The acoustic signal provides a continuous readout during testing, which allows complete data analysis rather than the start-stop clock wires used by the conventional method. The AE method also helps eliminate problems such as inhibiting the sample and pressure and temperature rise during testing.
In vivo lateral blood flow velocity measurement using speckle size estimation.
Xu, Tiantian; Hozan, Mohsen; Bashford, Gregory R
2014-05-01
In previous studies, we proposed blood flow measurement using speckle size estimation, which estimates the lateral component of blood flow within a single image frame based on the observation that the speckle pattern corresponding to blood reflectors (typically red blood cells) stretches (i.e., is "smeared") if blood flow is in the same direction as the electronically controlled transducer line selection in a 2-D image. In this observational study, the clinical viability of ultrasound blood flow velocity measurement using speckle size estimation was investigated and compared with that of conventional spectral Doppler of carotid artery blood flow data collected from human patients in vivo. Ten patients (six male, four female) were recruited. Right carotid artery blood flow data were collected in an interleaved fashion (alternating Doppler and B-mode A-lines) with an Antares Ultrasound Imaging System and transferred to a PC via the Axius Ultrasound Research Interface. The scanning velocity was 77 cm/s, and a 4-s interval of flow data was collected from each subject to cover three to five complete cardiac cycles. Conventional spectral Doppler data were collected simultaneously to compare with estimates made by speckle size estimation. The results indicate that the peak systolic velocities measured with the two methods are comparable (within ±10%) if the scan velocity is greater than or equal to the flow velocity. When the scan velocity is slower than the peak systolic velocity, the speckle stretch method asymptotes to the scan velocity. Thus, the speckle stretch method is able to accurately measure pure lateral flow, which conventional Doppler cannot do. In addition, an initial comparison of the speckle size estimation and color Doppler methods with respect to computational complexity and data acquisition time indicated potential time savings in blood flow velocity estimation using speckle size estimation.
Further studies are needed to extend the speckle stretch method across a field of view and to combine it with an appropriate axial flow estimator. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Resistivity Measurement by Dual-Configuration Four-Probe Method
NASA Astrophysics Data System (ADS)
Yamashita, Masato; Nishii, Toshifumi; Mizutani, Hiroya
2003-02-01
The American Society for Testing and Materials (ASTM) Committee has published a new technique for the measurement of resistivity, termed the dual-configuration four-probe method. The resistivity correction factor is a function only of the data obtained from two different electrical configurations of the four probes. Measurements of resistivity and sheet resistance are performed on graphite rectangular plates and indium tin oxide (ITO) films by both the conventional four-probe method and the dual-configuration four-probe method. It is demonstrated that the dual-configuration four-probe method, with a probe array of equal 10 mm separations, can be applied to specimens with thicknesses up to 3.7 mm if a relative resistivity difference of up to 5% is allowed.
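For context, the conventional equally spaced in-line four-probe relation for a thin, laterally infinite sheet is simple to state; the sketch below covers only that ideal-geometry case. The dual-configuration correction factor itself, which the paper derives from the resistances of the two probe configurations, is not reproduced here.

```python
import math

def sheet_resistance_infinite(v, i):
    """Sheet resistance (ohm/sq) of a thin, laterally infinite sample
    from an equally spaced in-line four-probe measurement:
        Rs = (pi / ln 2) * V / I.
    The dual-configuration method replaces this ideal geometric factor
    with a correction computed from two probe configurations, so finite
    specimen size and thickness can be accommodated."""
    return (math.pi / math.log(2)) * v / i

def resistivity(v, i, thickness):
    """Bulk resistivity (ohm*m), valid when the thickness is much
    smaller than the probe spacing."""
    return sheet_resistance_infinite(v, i) * thickness
```

The factor pi/ln 2 is about 4.532; deviations from it in a real measurement are exactly what the dual-configuration correction is designed to absorb.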
Tsang, C.F.; Doughty, C.A.
1984-02-24
A well-test method involving injection of hot (or cold) water into a groundwater aquifer, or injecting cold water into a geothermal reservoir is disclosed. By making temperature measurements at various depths in one or more observation wells, certain properties of the aquifer are determined. These properties, not obtainable from conventional well test procedures, include the permeability anisotropy, and layering in the aquifer, and in-situ thermal properties. The temperature measurements at various depths are obtained from thermistors mounted in the observation wells.
Mäkinen, Marja-Tellervo; Pesonen, Anne; Jousela, Irma; Päivärinta, Janne; Poikajärvi, Satu; Albäck, Anders; Salminen, Ulla-Stina; Pesonen, Eero
2016-08-01
The aim of this study was to compare deep body temperature obtained using a novel noninvasive continuous zero-heat-flux temperature measurement system with core temperatures obtained using conventional methods. A prospective, observational study. Operating room of a university hospital. The study comprised 15 patients undergoing vascular surgery of the lower extremities and 15 patients undergoing cardiac surgery with cardiopulmonary bypass (CPB). Zero-heat-flux thermometry on the forehead and standard core temperature measurements. Body temperature was measured using a new thermometry system (SpotOn; 3M, St. Paul, MN) on the forehead and with conventional methods in the esophagus during vascular surgery (n = 15), and in the nasopharynx and pulmonary artery during cardiac surgery (n = 15). The agreement between SpotOn and the conventional methods was assessed using the Bland-Altman random-effects approach for repeated measures. The mean difference between SpotOn and the esophageal temperature during vascular surgery was +0.08°C (95% limits of agreement -0.25 to +0.40°C). During cardiac surgery, off CPB, the mean difference between SpotOn and the pulmonary arterial temperature was -0.05°C (95% limits of agreement -0.56 to +0.47°C). Throughout cardiac surgery (on and off CPB), the mean difference between SpotOn and the nasopharyngeal temperature was -0.12°C (95% limits of agreement -0.94 to +0.71°C). Poor agreement between the SpotOn and nasopharyngeal temperatures was detected in hypothermia below approximately 32°C. According to this preliminary study, deep body temperature measured using the zero-heat-flux system was in good agreement with standard core temperatures during lower extremity vascular and cardiac surgery. However, agreement was questionable during hypothermia below 32°C. Copyright © 2016 Elsevier Inc. All rights reserved.
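The simple (non-repeated-measures) form of the Bland-Altman limits of agreement that underlies comparisons like this one can be sketched as follows; the study itself used the random-effects variant, which additionally accounts for multiple readings per patient.

```python
import statistics

def bland_altman(method_a, method_b):
    """Mean difference (bias) and 95% limits of agreement between two
    paired measurement methods: bias +/- 1.96 * SD of the differences.
    Returns (bias, lower_limit, upper_limit)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Reading the abstract's figures through this lens: a bias of +0.08°C with limits of -0.25 to +0.40°C means roughly 95% of paired SpotOn/esophageal differences fell within that band.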
NASA Astrophysics Data System (ADS)
Takamasu, Kiyoshi; Takahashi, Satoru; Kawada, Hiroki; Ikota, Masami
2018-03-01
LER (Line Edge Roughness) and LWR (Line Width Roughness) of a semiconductor device are important measures of device performance. Conventionally, LER and LWR are evaluated from CD-SEM (Critical Dimension Scanning Electron Microscope) images. However, CD-SEM measurement suffers from large high-frequency random noise, and its resolution is not sufficiently high. Several techniques have been proposed to treat the random noise of CD-SEM measurement; these require setting model and processing parameters, and the correctness of those parameters must be verified against a reference metrology. We have already proposed a novel reference metrology using an FIB (Focused Ion Beam) process and a planar-TEM (Transmission Electron Microscope) method. In this study, we applied the proposed method to three new samples: an SAQP (Self-Aligned Quadruple Patterning) FinFET device, a conventional EUV (Extreme Ultraviolet Lithography) resist, and a new-material EUV resist. LWR and the PSD (Power Spectral Density) of LWR are calculated from the edge positions on planar-TEM images. We confirmed that LWR and the PSD of LWR can be measured with high accuracy, and evaluated the differences among the samples with the proposed method. Furthermore, from comparisons with the PSD of the same sample measured by CD-SEM, the validity of PSD and LWR measurement by CD-SEM can be verified.
NASA Astrophysics Data System (ADS)
Kilic, Veli Tayfun; Unal, Emre; Demir, Hilmi Volkan
2017-05-01
In this work, we investigate a method for vessel detection and coil powering in an all-surface inductive heating system composed of outer-squircle coils. Besides conventional circular coils, coils of different shapes, such as outer-squircle coils, are used to enable efficient all-surface inductive heating. The validity of the method, which relies on measuring the inductance and resistance of a loaded coil at different frequencies, is experimentally demonstrated for a coil whose shape differs from the conventional circular coil. A simple setup with a small coil was constructed to model an all-surface inductive heating system. Inductance and resistance maps were generated by measuring the coil's inductance and resistance at different frequencies while it was loaded by a plate made of different materials and located at various positions. The results show that, for various coil geometries in an induction hob, it is possible to detect a vessel's presence, identify its material type, and specify its position on the hob surface from the coil's inductance and resistance measured at two or more frequencies. The studied method enables safe, efficient, and flexible heating in an all-surface inductive heating system by automatically detecting the vessel's presence and powering only the coils loaded by the vessel at predetermined current levels.
Pulmonary lobar volumetry using novel volumetric computer-aided diagnosis and computed tomography
Iwano, Shingo; Kitano, Mariko; Matsuo, Keiji; Kawakami, Kenichi; Koike, Wataru; Kishimoto, Mariko; Inoue, Tsutomu; Li, Yuanzhong; Naganawa, Shinji
2013-01-01
OBJECTIVES To compare the accuracy of pulmonary lobar volumetry using the conventional number of segments method and novel volumetric computer-aided diagnosis using 3D computed tomography images. METHODS We acquired 50 consecutive preoperative 3D computed tomography examinations for lung tumours reconstructed at 1-mm slice thicknesses. We calculated the lobar volume and the emphysematous lobar volume < −950 HU of each lobe using (i) the slice-by-slice method (reference standard), (ii) number of segments method, and (iii) semi-automatic and (iv) automatic computer-aided diagnosis. We determined Pearson correlation coefficients between the reference standard and the three other methods for lobar volumes and emphysematous lobar volumes. We also compared the relative errors among the three measurement methods. RESULTS Both semi-automatic and automatic computer-aided diagnosis results were more strongly correlated with the reference standard than the number of segments method. The correlation coefficients for automatic computer-aided diagnosis were slightly lower than those for semi-automatic computer-aided diagnosis because there was one outlier among 50 cases (2%) in the right upper lobe and two outliers among 50 cases (4%) in the other lobes. The number of segments method relative error was significantly greater than those for semi-automatic and automatic computer-aided diagnosis (P < 0.001). The computational time for automatic computer-aided diagnosis was 1/2 to 2/3 than that of semi-automatic computer-aided diagnosis. CONCLUSIONS A novel lobar volumetry computer-aided diagnosis system could more precisely measure lobar volumes than the conventional number of segments method. 
Because semi-automatic computer-aided diagnosis and automatic computer-aided diagnosis were complementary, in clinical use, it would be more practical to first measure volumes by automatic computer-aided diagnosis, and then use semi-automatic measurements if automatic computer-aided diagnosis failed. PMID:23526418
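The two agreement statistics this comparison rests on, Pearson's correlation coefficient against the reference standard and the relative error of each measurement, can be sketched as:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between paired measurements,
    e.g. lobar volumes from one method vs. the reference standard."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def relative_error(measured, reference):
    """Relative error of a measurement against the reference standard."""
    return abs(measured - reference) / reference
```

A method can correlate strongly with the reference yet still show large relative errors (a consistent over- or under-estimate), which is why the abstract reports both statistics.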
Harris, C; Alcock, A; Trefan, L; Nuttall, D; Evans, S T; Maguire, S; Kemp, A M
2018-02-01
Bruising is a common abusive injury in children, and it is standard practice to image and measure bruises, yet there is no current standard for measuring bruise size consistently. We aim to identify the optimal method of measuring photographic images of bruises, including computerised measurement techniques. 24 children aged <11 years (mean age 6.9, range 2.5-10 years) with a bruise were recruited from the community. Demographics and bruise details were recorded. Each bruise was measured in vivo using a paper measuring tape. Standardised conventional and cross-polarized digital images were obtained. The diameters of the bruise images were measured by three computer-aided measurement techniques: ImageJ (segmentation with Simple Interactive Object Extraction, maximum Feret diameter), the 'Circular Selection Tool' (circle diameter), and the Photoshop 'ruler' software (Photoshop diameter). Inter- and intra-observer effects were determined by two individuals repeating 11 electronic measurements, and the relevant intraclass correlation coefficients (ICCs) were used to establish reliability. Spearman's rank correlation was used to compare in vivo with computerised measurements; a comparison of measurement techniques across imaging modalities was conducted using Kolmogorov-Smirnov tests. Significance was set at p < 0.05 for all tests. Images were available for 38 bruises in vivo, with 48 bruises visible on cross-polarized imaging and 46 on conventional imaging (some bruises interpreted as single in vivo appeared to be multiple in digital images). Correlation coefficients were >0.5 for all techniques, with the maximum Feret diameter and maximum Photoshop diameter on conventional images having the strongest correlation with in vivo measurements. There were significant differences between in vivo and computer-aided measurements, but none between different computer-aided measurement techniques. Overall, computer-aided measurements appeared larger than in vivo.
Inter- and intra-observer agreement was high for all maximum diameter measurements (ICCs > 0.7). Whilst there were minimal differences between measurements of the images obtained, the most consistent results were obtained when conventional images, segmented by ImageJ software, were measured with the Feret diameter. This is therefore proposed as a standard for future research and forensic practice, with the proviso that all computer-aided measurements appear larger than in vivo. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
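The maximum Feret diameter reported by ImageJ is simply the largest "caliper" distance between any two points on the object's boundary; a brute-force sketch over a segmented outline:

```python
import math
from itertools import combinations

def max_feret_diameter(boundary_points):
    """Maximum Feret diameter of a 2-D object: the largest distance
    between any two points on its boundary (the caliper size reported
    by tools such as ImageJ). O(n^2) over the boundary points, which
    is fine for small segmented outlines."""
    return max(math.dist(p, q) for p, q in combinations(boundary_points, 2))
```

For large outlines a convex-hull rotating-calipers pass would be faster, but the quadratic definition above is the quantity being measured either way.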
NASA Technical Reports Server (NTRS)
Kojima, Jun; Nguyen, Quang-Viet
2007-01-01
An alternative optical thermometry technique that utilizes low-resolution (order 10 cm^-1) pure-rotational spontaneous Raman scattering of air is developed to aid single-shot multiscalar measurements in turbulent combustion studies. Temperature measurements are realized by correlating the measured envelope bandwidth of the pure-rotational manifold of the N2/O2 spectrum with a theoretical prediction of a species-weighted bandwidth. By coupling this thermometry technique with conventional vibrational Raman scattering for species determination, we demonstrate quantitative, spatially resolved, single-shot measurements of the temperature and fuel/oxidizer concentrations in a high-pressure turbulent CH4-air flame. Our technique provides not only an effective means of validating other temperature measurement methods, but also serves as a secondary thermometry technique in cases where the anti-Stokes vibrational N2 Raman signals are too low for a conventional vibrational temperature analysis.
Similarity estimation for reference image retrieval in mammograms using convolutional neural network
NASA Astrophysics Data System (ADS)
Muramatsu, Chisako; Higuchi, Shunichi; Morita, Takako; Oiwa, Mikinao; Fujita, Hiroshi
2018-02-01
Periodic breast cancer screening with mammography is considered effective in decreasing breast cancer mortality. For screening programs to be successful, an intelligent image analysis system may support radiologists' efficient image interpretation. In our previous studies, we investigated image retrieval schemes for diagnostic reference of breast lesions on mammograms and ultrasound images. Using a machine learning method, reliable similarity measures that agree with radiologists' similarity ratings were determined, and relevant images could be retrieved. However, our previous method includes a feature extraction step in which handcrafted features were determined based on manual outlines of the masses. Obtaining manual outlines of masses is not practical in clinical practice, and such data would be operator-dependent. In this study, we investigated a similarity estimation scheme using a convolutional neural network (CNN) to skip this procedure and to determine data-driven similarity scores. When the CNN was used as a feature extractor, with the extracted features fed to a conventional 3-layered neural network to determine the similarity measures, the determined measures correlated well with the subjective ratings, and the precision of retrieving diagnostically relevant images was comparable with that of the conventional method using handcrafted features. Using the CNN to determine the similarity measure directly also gave comparable results. By optimizing the network parameters, results may be further improved. The proposed method is potentially useful for determining similarity measures without precise lesion outlines for the retrieval of similar mass images on mammograms.
ERIC Educational Resources Information Center
Jarvenoja, Hanna; Volet, Simone; Jarvela, Sanna
2013-01-01
Self-regulated learning (SRL) research has conventionally relied on measures, which treat SRL as an aptitude. To study self-regulation and motivation in learning contexts as an ongoing adaptive process, situation-specific methods are needed in addition to static measures. This article presents an "Adaptive Instrument for Regulation of Emotions"…
NASA Astrophysics Data System (ADS)
Liu, Xiaohua; Zhou, Tianfeng; Zhang, Lin; Zhou, Wenchen; Yu, Jianfeng; Lee, L. James; Yi, Allen Y.
2018-07-01
Silicon is a promising mold material for compression molding because of its hardness and abrasion resistance. Silicon wafers with a carbide-bonded graphene coating and micro-patterns were evaluated as molds for the fabrication of microlens arrays. This study presents an efficient yet flexible manufacturing method for microlens arrays that combines a lapping method and a rapid molding procedure. Unlike conventional processes for microstructures on silicon wafers, such as diamond machining and photolithography, this research demonstrates a unique approach by employing precision steel balls and diamond slurries to create microlenses with accurate geometry. The feasibility of this method was demonstrated by the fabrication of several microlens arrays with different aperture sizes and pitches on silicon molds. The geometrical accuracy and surface roughness of the microlens arrays were measured using an optical profiler. The measurement results indicated good agreement with the optical profile of the design. The silicon molds were then used to copy the microstructures onto polymer substrates. The uniformity and quality of the samples molded through rapid surface molding were also assessed and statistically quantified. To further evaluate the optical functionality of the molded microlens arrays, their focal lengths were measured using a simple optical setup. The measurements showed that the microlens arrays molded in this research were comparable to those produced by conventional manufacturing methods. This research demonstrated an alternative low-cost and efficient method for microstructure fabrication on silicon wafers, together with the follow-up optical molding processes.
Remote sensing techniques for prediction of watershed runoff
NASA Technical Reports Server (NTRS)
Blanchard, B. J.
1975-01-01
Hydrologic parameters of watersheds, for use in mathematical models and as design criteria for flood detention structures, are sometimes difficult to quantify using conventional measuring systems. The advent of remote sensing devices developed in the past decade offers the possibility that watershed characteristics such as vegetative cover, soils, and soil moisture may be quantified rapidly and economically. Experiments with visible and near-infrared data from the LANDSAT-1 multispectral scanner indicate that a simple technique for calibration of runoff equation coefficients is feasible. The technique was tested on 10 watersheds in the Chickasha area, and test results show that more accurate runoff coefficients were obtained than with conventional methods. The technique worked equally well using a dry fall scene. The runoff equation coefficients were then predicted for 22 subwatersheds with flood detention structures. Predicted values were again more accurate than coefficients produced by conventional methods.
NASA Astrophysics Data System (ADS)
Mao, Cuili; Lu, Rongsheng; Liu, Zhijian
2018-07-01
In fringe projection profilometry, the phase errors caused by the nonlinear intensity response of digital projectors need to be correctly compensated. In this paper, a multi-frequency inverse-phase method is proposed. The theoretical model of periodic phase errors is analyzed. The periodic phase errors can be adaptively compensated in the wrapped maps by using a set of fringe patterns. The compensated phase is then unwrapped with the multi-frequency method. Compared with conventional methods, the proposed method can greatly reduce the periodic phase error without calibrating the measurement system. Simulation and experimental results are presented to demonstrate the validity of the proposed approach.
Standardizing lightweight deflectometer modulus measurements for compaction quality assurance
DOT National Transportation Integrated Search
2017-09-01
To evaluate the compaction of unbound geomaterials under unsaturated conditions and replace the conventional methods with a practical modulus-based specification using LWD, this study examined three different LWDs, the Zorn ZFG 3000 LWD, Dynatest 303...
Turbo fluid machinery and diffusers
NASA Technical Reports Server (NTRS)
Sakurai, T.
1984-01-01
The general theory behind turbo devices and diffusers is explained. Problems and the state of research on the basic equations of flow and on experimental and measuring methods are discussed. Conventional centrifugal compressor and fan diffusers are considered in detail.
Aragón, Mônica L C; Pontes, Luana F; Bichara, Lívia M; Flores-Mir, Carlos; Normando, David
2016-08-01
The development of 3D technology and the trend toward increasing use of intraoral scanners in dental office routine lead to the need for comparisons with conventional techniques. To determine whether intra- and inter-arch measurements from digital dental models acquired by an intraoral scanner are as reliable and valid as similar measurements from dental models obtained through conventional intraoral impressions. An unrestricted electronic search of seven databases until February 2015 was performed. Studies that focused on the accuracy and reliability of images obtained from intraoral scanners compared to images obtained from conventional impressions were included. After study selection, the QUADAS risk of bias assessment tool for diagnostic studies was used to assess the risk of bias (RoB) among the included studies. Four articles were included in the qualitative synthesis. The scanners evaluated were OrthoProof, Lava, iOC intraoral, Lava COS, iTero and D250. These studies evaluated the reliability of tooth widths, Bolton ratio measurements, and image superimposition. Two studies were classified as having low RoB; one had moderate RoB and the remaining one had high RoB. Only one study evaluated the time required to complete clinical procedures and patients' opinions about the procedure. Patients reported feeling more comfortable with the conventional dental impression method. Associated costs were not considered in any of the included studies. Inter- and intra-arch measurements from digital models produced from intraoral scans appeared to be reliable and accurate in comparison to those from conventional impressions. This assessment only applies to the intraoral scanner models considered in the finally included studies. Digital models produced by intraoral scanning eliminate the need for impression materials; however, currently, more time is needed to acquire the digital images. PROSPERO (CRD42014009702). None. © The Author 2016.
Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Nakano, Keita; Watanabe, Yukinobu; Kawase, Shoichiro; Wang, He; Otsu, Hideaki; Sakurai, Hiroyoshi; Takeuchi, Satoshi; Togano, Yasuhiro; Nakamura, Takashi; Maeda, Yukie; Ahn, Deuk Soon; Aikawa, Masayuki; Araki, Shouhei; Chen, Sidong; Chiga, Nobuyuki; Doornenbal, Pieter; Fukuda, Naoki; Ichihara, Takashi; Isobe, Tadaaki; Kawakami, Shunsuke; Kin, Tadahiro; Kondo, Yosuke; Koyama, Shunpei; Kubo, Toshiyuki; Kubono, Shigeru; Kurokawa, Meiko; Makinaga, Ayano; Matsushita, Masafumi; Matsuzaki, Teiichiro; Michimasa, Shin'ichiro; Momiyama, Satoru; Nagamine, Shunsuke; Niikura, Megumi; Ozaki, Tomoyuki; Saito, Atsumi; Saito, Takeshi; Shiga, Yoshiaki; Shikata, Mizuki; Shimizu, Yohei; Shimoura, Susumu; Sumikama, Toshiyuki; Söderström, Pär-Anders; Suzuki, Hiroshi; Takeda, Hiroyuki; Taniuchi, Ryo; Tsubota, Jun'ichi; Watanabe, Yasushi; Wimmer, Kathrin; Yamamoto, Tatsuya; Yoshida, Koichi
2017-09-01
Isotopic production cross sections were measured for proton- and deuteron-induced reactions on 93Nb by means of the inverse kinematics method at the RIKEN Radioactive Isotope Beam Factory. The measured production cross sections of residual nuclei in the reaction 93Nb + p at 113 MeV/u were compared with previous data measured by the conventional activation method in the proton energy range between 46 and 249 MeV. The present inverse kinematics data for four reaction products (90Mo, 90Nb, 88Y, and 86Y) were in good agreement with the activation measurements. Model calculations with PHITS, which describes the intra-nuclear cascade and evaporation processes, also reproduced the measured isotopic production cross sections generally well.
Zahid, Sarwar; Peeler, Crandall; Khan, Naheed; Davis, Joy; Mahmood, Mahdi; Heckenlively, John R; Jayasundera, Thiran
2014-01-01
To develop a reliable and efficient digital method to quantify planimetric Goldmann visual field (GVF) data to monitor disease course and treatment responses in retinal degenerative diseases. A novel method to digitally quantify GVFs using Adobe Photoshop CS3 was developed for comparison to traditional digital planimetry (Placom 45C digital planimeter; Engineer Supply, Lynchburg, Virginia, USA). GVFs from 20 eyes from 10 patients with Stargardt disease were quantified to assess the difference between the two methods (a total of 230 measurements per method). This quantification approach was also applied to 13 patients with X-linked retinitis pigmentosa (XLRP) with mutations in RPGR. Overall, measurements using Adobe Photoshop were performed more rapidly than those using conventional planimetry. Photoshop measurements also exhibited less inter- and intraobserver variability. GVF areas for the I4e isopter in patients with the same mutation in RPGR who were close in age had similar qualitative and quantitative areas. Quantification of GVFs using Adobe Photoshop is quicker, more reliable, and less user dependent than conventional digital planimetry. It will be a useful tool for both retrospective and prospective studies of disease course, as well as for monitoring treatment response in clinical trials for retinal degenerative diseases.
Impressions of functional food consumers.
Saher, Marieke; Arvola, Anne; Lindeman, Marjaana; Lähteenmäki, Liisa
2004-02-01
Functional foods provide a new way of expressing healthiness in food choices. The objective of this study was to apply an indirect measure to explore what kind of impressions people form of users of functional foods. Respondents (n=350) received one of eight versions of a shopping list and rated the buyer of the foods on 66 bipolar attributes on 7-point scales. The shopping lists had either healthy or neutral background items and either conventional or functional target items, and the buyer was described as either a 40-year-old woman or a 40-year-old man. The attribute ratings revealed three factors: disciplined, innovative and gentle. Buyers with healthy background items were perceived as more disciplined than those with neutral items on the list; users of functional foods were rated as more disciplined than users of conventional target items only when the background list consisted of neutral items. Buyers of functional foods were regarded as more innovative and less gentle, but gender affected the ratings on the gentle dimension. The impressions of functional food users clearly differ from those formed of users of conventional foods with a healthy image. The shopping list method performed well as an indirect method, but further studies are required to test its feasibility in measuring other food-related impressions.
Kheyrandish, Ataollah; Mohseni, Madjid; Taghipour, Fariborz
2018-06-15
Determining fluence is essential to derive the inactivation kinetics of microorganisms and to design ultraviolet (UV) reactors for water disinfection. UV light-emitting diodes (UV-LEDs) are emerging UV sources with various advantages over conventional UV lamps. Unlike for conventional mercury lamps, no standard method is available to determine the average fluence delivered by UV-LEDs, and the conventional methods used to determine the fluence of UV mercury lamps are not applicable to UV-LEDs due to their relatively low power output, polychromatic output, and specific radiation profiles. In this study, a method was developed to determine the average fluence inside a water suspension in a UV-LED experimental setup. In this method, the average fluence was estimated by measuring the irradiance at a few points for a collimated and uniform radiation field on a Petri dish surface. New correction parameters were defined and proposed, and several of the existing parameters for determining the fluence of a UV mercury lamp apparatus were revised to measure and quantify the collimation and uniformity of the radiation. To study the effect of the polychromatic output and radiation profile of UV-LEDs, two UV-LEDs with peak wavelengths of 262 and 275 nm and different radiation profiles were selected as representatives of typical UV-LEDs applied to microbial inactivation. The proper setup configuration for microorganism inactivation studies was also determined based on the defined correction factors.
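For orientation, the conventional collimated-beam arithmetic that such protocols build on multiplies a single irradiance reading by standard correction factors (Petri, reflection, water, divergence) and by exposure time. The sketch below shows only that conventional calculation; the study's new UV-LED-specific correction parameters are not reproduced, and all numeric inputs are illustrative:

```python
import numpy as np

def average_fluence(E0_mW_cm2, petri_factor, a_cm, depth_cm,
                    beam_len_cm, time_s, refl=0.975):
    """Average fluence (mJ/cm^2) in a stirred suspension from one irradiance
    reading at the Petri dish centre, using standard collimated-beam
    correction factors:
      water factor      - attenuation through the suspension depth,
      divergence factor - beam spreading over the water column,
      reflection factor - loss at the air/water interface (~0.975),
      Petri factor      - spatial non-uniformity over the dish surface."""
    water = (1 - 10 ** (-a_cm * depth_cm)) / (a_cm * depth_cm * np.log(10))
    diverg = beam_len_cm / (beam_len_cm + depth_cm)
    rate = E0_mW_cm2 * petri_factor * refl * water * diverg  # mW/cm^2
    return rate * time_s                                     # mJ/cm^2
```

Here `a_cm` is the decadic absorption coefficient of the suspension (1/cm) and `beam_len_cm` the source-to-surface distance.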
Structured light system calibration method with optimal fringe angle.
Li, Beiwen; Zhang, Song
2014-11-20
For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually done by projecting horizontal and vertical sequences of patterns to establish a one-to-one mapping between camera points and projector points. However, for a well-designed system, one of the two fringe orientations (horizontal or vertical) is not sensitive to depth variation and thus yields an inaccurate mapping. As a result, the calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on optimal fringe angle determination. Experiments demonstrate that our calibration approach can increase the measurement accuracy by up to 38% compared to the conventional calibration method, with a calibration volume of 300(H) mm×250(W) mm×500(D) mm.
Otsuka, Makoto; Yamanaka, Azusa; Uchino, Tomohiro; Otsuka, Kuniko; Sadamoto, Kiyomi; Ohshima, Hiroyuki
2012-01-01
To measure the rapid disintegration of oral disintegrating tablets (ODTs), a new test (XCT) was developed using X-ray computed tomography (X-ray CT). Placebo ODT, rapid disintegration candy (RDC) and Gaster®-D-Tablets (GAS) were used as model samples. All these ODTs were used to measure the oral disintegration time (DT) in distilled water at 37±2°C by XCT. DTs were affected by the width of the mesh screens and by the degree to which air bubbles caused the tablet holder to vibrate. An in-vivo tablet disintegration test was performed for RDC using 11 volunteers. The DT by the in-vivo method was significantly longer than that obtained using the conventional tester. The experimental conditions for XCT, such as the width of the mesh screen and the degree of vibration, were adjusted to be consistent with the human DT values. Since the DTs by the XCT method were almost the same as the human data, this method was able to quantitatively evaluate the rapid disintegration of ODTs under the same conditions as inside the oral cavity. The DTs of four commercially available ODTs were comparatively evaluated by the XCT method, the conventional tablet disintegration test and the in-vivo method.
Shi, Joy; Korsiak, Jill; Roth, Daniel E
2018-03-01
We aimed to demonstrate the use of jackknife residuals to take advantage of the longitudinal nature of available growth data in assessing potential biologically implausible values and outliers. Artificial errors were induced in 5% of length, weight, and head circumference measurements, measured on 1211 participants from the Maternal Vitamin D for Infant Growth (MDIG) trial from birth to 24 months of age. Each child's sex- and age-standardized z-score or raw measurements were regressed as a function of age in child-specific models. Each error responsible for a biologically implausible decrease between a consecutive pair of measurements was identified based on the higher of the two absolute values of jackknife residuals in each pair. In further analyses, outliers were identified as those values beyond fixed cutoffs of the jackknife residuals (e.g., greater than +5 or less than -5 in primary analyses). Kappa, sensitivity, and specificity were calculated over 1000 simulations to assess the ability of the jackknife residual method to detect induced errors and to compare these methods with the use of conditional growth percentiles and conventional cross-sectional methods. Among the induced errors that resulted in a biologically implausible decrease in measurement between two consecutive values, the jackknife residual method identified the correct value in 84.3%-91.5% of these instances when applied to the sex- and age-standardized z-scores, with kappa values ranging from 0.685 to 0.795. Sensitivity and specificity of the jackknife method were higher than those of the conditional growth percentile method, but specificity was lower than for conventional cross-sectional methods. Using jackknife residuals provides a simple method to identify biologically implausible values and outliers in longitudinal child growth data sets in which each child contributes at least 4 serial measurements. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
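The child-specific regression with jackknife (externally studentized) residuals described above can be sketched as follows. This is a minimal NumPy reconstruction of the general technique, not the MDIG analysis code; the ±5 cutoff follows the primary analyses, and the data in the usage example are synthetic:

```python
import numpy as np

def jackknife_residuals(age, z):
    """Externally studentized (jackknife) residuals from a child-specific
    linear regression of z-score on age. Each residual is scaled by the
    residual variance estimated with that observation deleted, so a single
    gross error does not inflate its own denominator. Needs >= 4 points."""
    X = np.column_stack([np.ones_like(age), age])
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    e = z - X @ beta                                   # ordinary residuals
    h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)      # leverages
    s2 = e @ e / (n - p)
    s2_del = ((n - p) * s2 - e**2 / (1 - h)) / (n - p - 1)  # deleted variance
    return e / np.sqrt(s2_del * (1 - h))

def flag_outliers(age, z, cutoff=5.0):
    """Flag measurements whose |jackknife residual| exceeds the cutoff."""
    return np.abs(jackknife_residuals(age, z)) > cutoff
```

In practice the flagged value would then be checked against the paired measurement of the implausible decrease, as the abstract describes.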
The reliability and validity of a three-camera foot image system for obtaining foot anthropometrics.
O'Meara, Damien; Vanwanseele, Benedicte; Hunt, Adrienne; Smith, Richard
2010-08-01
The purpose was to develop a foot image capture and measurement system with web cameras (the 3-FIS) to provide reliable and valid foot anthropometric measures with efficiency comparable to that of the conventional method of using a handheld anthropometer. Eleven foot measures were obtained from 10 subjects using both methods. Reliability of each method was determined over 3 consecutive days using the intraclass correlation coefficient and root mean square error (RMSE). Reliability was excellent for both the 3-FIS and the handheld anthropometer for 10 of the 11 variables, and good for the fifth metatarsophalangeal joint height. The RMSE values over 3 days ranged from 0.9 to 2.2 mm for the handheld anthropometer, and from 0.8 to 3.6 mm for the 3-FIS. The RMSE values between the 3-FIS and the handheld anthropometer were between 2.3 and 7.4 mm. The 3-FIS required less time to collect and process the final variables than the handheld anthropometer. The 3-FIS provided accurate and reproducible results for each of the foot variables, and in less time than the conventional approach of a handheld anthropometer.
Radionuclide evaluation of left ventricular function with nonimaging probes.
Wexler, J P; Blaufox, M D
1979-10-01
Portable nonimaging probes have been developed that can evaluate left ventricular function using radionuclide techniques. Two modes of data acquisition are possible with these probe systems: first-pass and gated. Precordial radiocardiograms obtained after a bolus injection can be used to determine cardiac output, pulmonary transit time, pulmonary blood volume, left ventricular ejection fraction, and left-to-right shunts. Gated techniques can be used to determine left ventricular ejection fraction and systolic time intervals. Probe-determined indices of left ventricular function agree closely with comparable measurements determined by conventional camera-computer methods as well as by invasive techniques. They have begun to be used, in a preliminary manner, in a variety of clinical problems associated with left ventricular dysfunction. This review discusses the types of probe systems available and the methods used in positioning them, and details the specifics of their data acquisition and processing capacity. The major criticisms of probe methods are that they are nonimaging and that they measure global rather than regional left ventricular function. In spite of these criticisms, probe systems, because of their portability, high sensitivity, and relatively low cost, are useful supplements to conventional camera-computer systems for the measurement of parameters of left ventricular performance using radionuclide techniques.
A simple linear model for estimating ozone AOT40 at forest sites from raw passive sampling data.
Ferretti, Marco; Cristofolini, Fabiana; Cristofori, Antonella; Gerosa, Giacomo; Gottardini, Elena
2012-08-01
A rapid, empirical method is described for estimating weekly AOT40 from ozone concentrations measured with passive samplers at forest sites. The method is based on linear regression and was developed after three years of measurements in Trentino (northern Italy). It was tested against an independent set of data from passive sampler sites across Italy. It provides good weekly estimates compared with those measured by conventional monitors (0.85 ≤ R² ≤ 0.970; 97 ≤ RMSE ≤ 302). Estimates obtained using passive sampling at forest sites are comparable to those obtained by another estimation method based on modelling hourly concentrations (R² = 0.94; 131 ≤ RMSE ≤ 351). Regression coefficients of passive sampling are similar to those obtained with conventional monitors at forest sites. Testing against an independent dataset generated by passive sampling provided similar results (0.86 ≤ R² ≤ 0.99; 65 ≤ RMSE ≤ 478). Errors tend to accumulate when weekly AOT40 estimates are summed to obtain the total AOT40 over the May-July period, and the median deviation between the two estimation methods based on passive sampling is 11%. The method proposed does not require any assumptions, complex calculations or modelling techniques, and can be useful when other estimation methods are not feasible, either in principle or in practice. However, the method is not useful when estimates of hourly concentrations are of interest.
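A linear-regression estimator of this kind can be sketched as follows. This is a minimal illustration with NumPy: the published regression coefficients are not reproduced (the training pairs below are synthetic), and clipping the estimate at zero is an added assumption since AOT40 cannot be negative:

```python
import numpy as np

def fit_aot40_model(weekly_mean_ppb, weekly_aot40_ppb_h):
    """Fit AOT40 ~ b0 + b1 * weekly mean ozone concentration by ordinary
    least squares, using weeks where a co-located conventional monitor
    provides the true hourly-based AOT40."""
    b1, b0 = np.polyfit(weekly_mean_ppb, weekly_aot40_ppb_h, 1)
    return b0, b1

def estimate_weekly_aot40(mean_ppb, b0, b1):
    """Estimate weekly AOT40 from a passive-sampler weekly mean (clipped at 0)."""
    return max(0.0, b0 + b1 * mean_ppb)
```

Seasonal totals would then be the sum of the weekly estimates, with the error-accumulation caveat noted in the abstract.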
NASA Astrophysics Data System (ADS)
Zhou, Yunfei; Cai, Hongzhi; Zhong, Liyun; Qiu, Xiang; Tian, Jindong; Lu, Xiaoxu
2017-05-01
In white light scanning interferometry (WLSI), the accuracy of profile measurement achieved with the conventional zero optical path difference (ZOPD) position locating method is closely related to the shape of the interference signal envelope (ISE), which is mainly determined by the spectral distribution of the illumination source. For a broadband source with a Gaussian spectral distribution, the corresponding ISE shape is symmetric, so the accurate ZOPD position can be located easily. However, if the spectral distribution of the source is irregular, the ISE shape becomes asymmetric or shows a complex multi-peak distribution, and WLSI cannot work well using the ZOPD position locating method. To address this problem, we propose a time-delay estimation (TDE) based WLSI method, in which the surface profile information is obtained from the relative displacement of the interference signal between different pixels instead of from the conventional ZOPD position locating method. Because all the spectral information of the interference signal (envelope and phase) is utilized, the proposed method not only offers high accuracy but can also achieve accurate profile measurement in cases where the ISE shape is irregular and the ZOPD position locating method fails. That is to say, the proposed method can effectively eliminate the influence of the source spectrum.
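The core idea, estimating the relative displacement between two pixels' interference signals rather than locating a ZOPD position per pixel, can be illustrated with a simple cross-correlation time-delay estimate. This is an integer-sample sketch only; the published method works in the spectral domain with full envelope-and-phase information for sub-sample accuracy:

```python
import numpy as np

def relative_shift(sig_ref, sig_px):
    """Relative displacement (in scan samples) of one pixel's interference
    signal with respect to a reference pixel, from the peak of their
    cross-correlation. A positive value means sig_px lags sig_ref."""
    c = np.correlate(sig_px - sig_px.mean(),
                     sig_ref - sig_ref.mean(), mode="full")
    return int(np.argmax(c)) - (len(sig_ref) - 1)
```

The height difference between the two pixels is then the shift multiplied by the scanner step size, regardless of the envelope's shape.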
NASA Technical Reports Server (NTRS)
Martindale, W. R.; Carter, L. D.
1975-01-01
Pitot pressure and total-temperature measurements were made in the windward surface shock layer of two 0.0175-scale space shuttle orbiter models at simulated re-entry conditions. Corresponding surface static pressure measurements were also made. Flow properties at the edge of the model boundary layer were derived from these measurements and compared with values calculated using conventional methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engberg, L; KTH Royal Institute of Technology, Stockholm; Eriksson, K
Purpose: To formulate objective functions of a multicriteria fluence map optimization model that correlate well with plan quality metrics, and to solve this multicriteria model by convex approximation. Methods: In this study, objectives of a multicriteria model are formulated to explicitly either minimize or maximize a dose-at-volume measure. Given the widespread agreement that dose-at-volume levels play important roles in plan quality assessment, these objectives correlate well with plan quality metrics. This is in contrast to the conventional objectives, which maximize clinical goal achievement by penalizing deviations from given dose-at-volume thresholds: while balancing the new objectives means explicitly balancing dose-at-volume levels, balancing the conventional objectives effectively means balancing deviations. Because the dose-at-volume measure is inherently non-convex, the new objectives are approximated by the convex mean-tail-dose measure (CVaR measure), yielding a convex approximation of the multicriteria model. Results: Advantages of using the convex approximation are investigated through juxtaposition with the conventional objectives in a computational study of two patient cases. The clinical goals of each case point out three ROI dose-at-volume measures to be considered for plan quality assessment. In the convex approximation, this translates into minimizing three mean-tail-dose measures. Evaluations of the three ROI dose-at-volume measures on Pareto optimal plans are used to represent the plan quality of the Pareto sets. Besides providing increased accuracy in terms of feasibility of solutions, the convex approximation generates Pareto sets with overall improved plan quality. In one case, the Pareto set generated by the convex approximation entirely dominates that generated with the conventional objectives. Conclusion: The initial computational study indicates that the convex approximation outperforms the conventional objectives in aspects of accuracy and plan quality.
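The mean-tail-dose (CVaR) measure can be illustrated for the upper tail: the mean dose of the hottest fraction v of voxels is a convex function of the dose distribution and stands in for the non-convex dose-at-volume D_v. This discrete-voxel sketch ignores the fractional-voxel weighting of the exact CVaR definition and is not the authors' optimization code:

```python
import numpy as np

def mean_tail_dose_upper(dose, v):
    """Upper mean-tail-dose: mean dose of the hottest fraction v of voxels.
    It upper-bounds the dose-at-volume D_v, so minimizing it drives D_v
    down while remaining convex in the voxel doses."""
    dose = np.sort(np.asarray(dose, dtype=float))[::-1]  # hottest first
    k = max(1, int(np.ceil(v * dose.size)))              # tail size in voxels
    return dose[:k].mean()
```

The lower-tail analogue (mean of the coldest fraction) plays the same role for target coverage objectives.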
Measuring Up: Online Technology Assessment Tools Ease the Teacher's Burden and Help Students Learn
ERIC Educational Resources Information Center
Roland, Jennifer
2006-01-01
Standards are a reality in all academic disciplines, and they can be hard to measure using conventional methods. Technology skills in particular are hard to assess using multiple-choice, paper-based tests. A new generation of online assessments of student technology skills allows students to prove proficiency by completing tasks in their natural…
Wang, Chuji; Winstead, Christopher; Duan, Yixiang
2006-05-30
Provided is a novel system for conducting elemental measurements using cavity ring-down spectroscopy (CRDS). The present invention provides sensitivity improved thousands of times over conventional devices, and does so with the advantages of low power, a low plasma flow rate, and the ability to be sustained with various gases.
Validation of a new device to quantify groundwater-surface water exchange
NASA Astrophysics Data System (ADS)
Cremeans, Mackenzie M.; Devlin, J. F.
2017-11-01
Distributions of flow across the groundwater-surface water interface should be expected to be as complex as the geologic deposits associated with stream or lake beds and their underlying aquifers. In these environments, the conventional Darcy-based method of characterizing flow systems (near streams) has significant limitations, including reliance on parameters with high uncertainties (e.g., hydraulic conductivity), the common use of drilled wells in the case of streambank investigations, and potentially lengthy measurement times for aquifer characterization and water level measurements. Less logistically demanding tools for quantifying exchanges across streambeds have been developed and include drive-point mini-piezometers, seepage meters, and temperature profiling tools. This project adds to that toolbox by introducing the Streambed Point Velocity Probe (SBPVP), a reusable tool designed to quantify groundwater-surface water interactions (GWSWI) at the interface with high density sampling, which can effectively, rapidly, and accurately complement conventional methods. The SBPVP is a direct push device that measures in situ water velocities at the GWSWI with a small-scale tracer test on the probe surface. Tracer tests do not rely on hydraulic conductivity or gradient information, nor do they require long equilibration times. Laboratory testing indicated that the SBPVP has an average accuracy of ± 3% and an average precision of ± 2%. Preliminary field testing, conducted in the Grindsted Å in Jutland, Denmark, yielded promising agreement between groundwater fluxes determined by conventional methods and those estimated from the SBPVP tests executed at similar scales. These results suggest the SBPVP is a viable tool to quantify groundwater-surface water interactions in high definition in sandy streambeds.
Rayarao, Geetha; Biederman, Robert W W; Williams, Ronald B; Yamrozik, June A; Lombardi, Richard; Doyle, Mark
2018-01-01
To establish the clinical validity and accuracy of automatic thresholding and manual trimming (ATMT) by comparing the method with the conventional contouring method for in vivo cardiac volume measurements. CMR was performed on 40 subjects (30 patients and 10 controls) using steady-state free precession cine sequences with slices oriented in the short-axis and acquired contiguously from base to apex. Left ventricular (LV) volumes, end-diastolic volume, end-systolic volume, and stroke volume (SV) were obtained with ATMT and with the conventional contouring method. Additionally, SV was measured independently using CMR phase velocity mapping (PVM) of the aorta for validation. Three methods of calculating SV were compared by applying Bland-Altman analysis. The Bland-Altman standard deviation of variation (SD) and offset bias for LV SV for the three sets of data were: ATMT-PVM (7.65, [Formula: see text]), ATMT-contours (7.85, [Formula: see text]), and contour-PVM (11.01, 4.97), respectively. Equating the observed range to the error contribution of each approach, the error magnitude of ATMT:PVM:contours was in the ratio 1:2.4:2.5. Use of ATMT for measuring ventricular volumes accommodates trabeculae and papillary structures more intuitively than contemporary contouring methods. This results in lower variation when analyzing cardiac structure and function and consequently improved accuracy in assessing chamber volumes.
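The Bland-Altman comparison applied above can be sketched generically as follows (bias, SD of the paired differences, and 95% limits of agreement; the study's stroke-volume data are not reproduced, and the numbers in the test are synthetic):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics between two measurement methods:
    returns the mean offset (bias), the sample SD of the paired differences,
    and the 95% limits of agreement (bias +/- 1.96 SD)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)
```

As in the abstract, a smaller SD of differences between two methods indicates lower variation, and the pairwise SDs can be combined to apportion error among three methods.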
Vibration measurement with nonlinear converter in the presence of noise
NASA Astrophysics Data System (ADS)
Mozuras, Almantas
2017-10-01
Conventional vibration measurement methods use the linear properties of physical converters. These methods are strongly influenced by nonlinear distortions, because ideal linear converters are not available. Practically, any converter can be considered linear when its output signal is very small. However, the influence of noise increases significantly and the signal-to-noise ratio decreases at lower signals. As the output signal increases, nonlinear distortions also grow. If a wide-spectrum vibration is measured, conventional methods face harmonic distortion as well as intermodulation effects. The purpose of this research is to develop a measurement method for wide-spectrum vibration using a converter described by a nonlinear function of type f(x), where x = x(t) denotes the dependence of coordinate x on time t due to the vibration. The parameter x(t) describing the vibration is expressed as a Fourier series. The spectral components of the converter output f(x(t)) are determined by using the Fourier transform. The obtained system of nonlinear equations is solved using the least squares technique, which permits finding x(t) in the presence of noise. This method allows one to carry out absolute or relative vibration measurements. High resistance to noise is typical for the absolute vibration measurement, but it is necessary to know the Taylor expansion coefficients of the function f(x). If the Taylor expansion is not known, the relative measurement of vibration parameters is also possible, but with lower resistance to noise. This method allows one to eliminate the influence of nonlinear distortions on the measurement results, and consequently to eliminate harmonic distortion and intermodulation effects.
The use of the nonlinear properties of the converter for measurement gives some advantages related to an increased frequency range of the output signal (consequently increasing the number of equations), which allows one to decrease the noise influence on the measurement results: the greater the nonlinearity, the lower the noise. This method enables the use of converters that are normally unsuitable due to their high nonlinearity.
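A minimal numpy sketch of the idea, assuming a known quadratic converter nonlinearity and a two-harmonic vibration; a Gauss-Newton loop stands in for the least squares solution described above, and all values are invented:

```python
import numpy as np

a1, a2 = 1.0, 0.3  # assumed known Taylor coefficients of f(x) = a1*x + a2*x**2
t = np.linspace(0.0, 1.0, 200, endpoint=False)
basis = np.stack([np.sin(2 * np.pi * t), np.sin(4 * np.pi * t)], axis=1)

def x_of_t(c):
    # Two-harmonic Fourier model of the vibration x(t).
    return basis @ c

true_c = np.array([0.8, 0.2])
rng = np.random.default_rng(0)
y = a1 * x_of_t(true_c) + a2 * x_of_t(true_c) ** 2
y = y + 0.01 * rng.standard_normal(t.size)  # measurement noise

# Gauss-Newton least squares for the Fourier coefficients c.
c = np.array([1.0, 0.0])
for _ in range(20):
    xc = x_of_t(c)
    residual = a1 * xc + a2 * xc ** 2 - y
    jacobian = (a1 + 2 * a2 * xc)[:, None] * basis  # d(residual)/dc
    step, *_ = np.linalg.lstsq(jacobian, residual, rcond=None)
    c = c - step
```

With the nonlinearity known, the recovered coefficients `c` approach the true values despite the noise, mirroring the "absolute measurement" case described above.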
Modeling the relative GHG emissions of conventional and shale gas production.
Stephenson, Trevor; Valle, Jose Eduardo; Riera-Palou, Xavier
2011-12-15
Recent reports show growing reserves of unconventional gas are available and that there is an appetite from policy makers, industry, and others to better understand the GHG impact of exploiting reserves such as shale gas. There is little publicly available data comparing unconventional and conventional gas production. Existing studies rely on national inventories, but it is not generally possible to separate emissions from unconventional and conventional sources within these totals. Even if unconventional and conventional sites had been listed separately, it would not be possible to eliminate site-specific factors to compare gas production methods on an equal footing. To address this difficulty, the emissions of gas production have instead been modeled. In this way, parameters common to both methods of production can be held constant, while allowing those parameters which differentiate unconventional gas and conventional gas production to vary. The results are placed into the context of power generation, to give a "well-to-wire" (WtW) intensity. It was estimated that shale gas typically has a WtW emissions intensity about 1.8-2.4% higher than conventional gas, arising mainly from higher methane releases in well completion. Even using extreme assumptions, it was found that WtW emissions from shale gas need be no more than 15% higher than conventional gas if flaring or recovery measures are used. In all cases considered, the WtW emissions of shale gas powergen are significantly lower than those of coal. PMID:22085088
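The reported percentages translate into absolute intensities only relative to a baseline; a back-of-envelope sketch assuming a placeholder conventional-gas figure (not a number from the study):

```python
# Assumed baseline well-to-wire intensity for conventional gas, gCO2e/kWh.
conventional_wtw = 500.0

# "Typically 1.8-2.4% higher" and "no more than 15% higher" cases:
typical_shale = [conventional_wtw * (1 + f) for f in (0.018, 0.024)]
worst_case_shale = conventional_wtw * 1.15
```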
Modified neural networks for rapid recovery of tokamak plasma parameters for real time control
NASA Astrophysics Data System (ADS)
Sengupta, A.; Ranjan, P.
2002-07-01
Two modified neural network techniques are used for the identification of the equilibrium plasma parameters of the Superconducting Steady State Tokamak I from external magnetic measurements. This is expected to ultimately assist in real time plasma control. In the first method, unlike the conventional structure in which a single network with the optimum number of processing elements calculates the outputs, a multinetwork system connected in parallel performs the calculations. This network is called the double neural network. The accuracy of the recovered parameters is clearly higher than that of the conventional network. The other type of neural network used here is based on statistical function parametrization combined with a neural network. The principal component transformation removes linear dependences from the measurements, and a dimensional reduction process reduces the dimensionality of the input space. This reduced and transformed input set, rather than the entire set, is fed into the neural network input. This is known as the principal component transformation-based neural network. The accuracy of the recovered parameters in this latter type of modified network is found to be a further improvement over that of the double neural network. This result differs from that obtained in an earlier work, where the double neural network showed better performance. The conventional network and function parametrization methods have also been used for comparison. The conventional network has been used for an optimization of the set of magnetic diagnostics. The effective set of sensors, as assessed by this network, is compared with the principal component based network. Fault tolerance of the neural networks has been tested. The double neural network showed the maximum resistance to faults in the diagnostics, while the principal component based network performed poorly. Finally, the processing times of the methods have been compared.
The double network and the principal component network involve the minimum computation time, although the conventional network also performs well enough to be used in real time.
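The principal component preprocessing described above can be sketched as follows; synthetic "magnetic measurements" and a plain least-squares map stand in for the tokamak data and the neural network (all data invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_sensors = 500, 20
latent = rng.standard_normal((n_samples, 3))      # 3 true degrees of freedom
mixing = rng.standard_normal((3, n_sensors))
measurements = latent @ mixing + 0.01 * rng.standard_normal((n_samples, n_sensors))
target = latent @ np.array([1.0, -0.5, 0.3])      # a stand-in "plasma parameter"

# Principal component transformation: center, then project onto the
# leading right singular vectors (removes linear dependences).
centered = measurements - measurements.mean(axis=0)
vt = np.linalg.svd(centered, full_matrices=False)[2]
n_keep = 3                                        # dimensional reduction
reduced = centered @ vt[:n_keep].T

# A linear least-squares regressor on the reduced inputs.
coef, *_ = np.linalg.lstsq(reduced, target - target.mean(), rcond=None)
pred = reduced @ coef + target.mean()
rms_error = np.sqrt(np.mean((pred - target) ** 2))
```

The reduced 3-component input carries essentially all the signal in the 20 correlated channels, which is the efficiency argument made for the principal component based network.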
NASA Astrophysics Data System (ADS)
Matsumoto, Takahiro; Nagata, Yasuaki; Nose, Tetsuro; Kawashima, Katsuhiro
2001-06-01
We show two kinds of demonstrations using a laser ultrasonic method. First, we present the results of Young's modulus of ceramics at temperatures above 1600 °C. Second, we introduce the method to determine the internal temperature distribution of a hot steel plate with errors of less than 3%. We compare the results obtained by this laser ultrasonic method with conventional contact techniques to show the validity of this method.
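As a hedged illustration of the relation underlying the first demonstration, Young's modulus follows from the density and the measured thin-rod ultrasonic velocity; the alumina-like values below are assumptions, not the paper's measurements:

```python
# Thin-rod approximation: E = rho * v**2, with v the bar-wave velocity.
def youngs_modulus(density_kg_m3, bar_velocity_m_s):
    return density_kg_m3 * bar_velocity_m_s ** 2

E = youngs_modulus(3900.0, 9000.0)  # illustrative ceramic values, ~316 GPa
```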
Moral Judgement Changes among Undergraduates in a Capstone Internship Experience
ERIC Educational Resources Information Center
Craig, Patricia J.; Oja, Sharon Nodie
2013-01-01
This mixed-methods study explored the moral growth of undergraduates in a recreation management internship experience. The quantitative phase reported moral judgement gains in Personal Interest and Post-conventional schema, and N-2 scores, as measured by the Defining Issues Test 2 (DIT-2), among 33 interns. The case-study method used a pattern…
Using Touchscreens as Position Detectors in Physics Experiments
ERIC Educational Resources Information Center
Dilek, Ufuk; Sengören, Serap Kaya
2017-01-01
The position of a ball was measured by using the touchscreen of a mobile phone during its rolling motion. The translational speed of the ball was determined using the recorded position and time data. The speed was also calculated by a conventional method. The speed values determined by the two methods were consistent, thus it was proven that a…
John B. Loomis; George Peterson; Patricia A. Champ; Thomas C. Brown; Beatrice Lucero
1998-01-01
Estimating empirical measures of an individual's willingness to accept that are consistent with conventional economic theory has proven difficult. The method of paired comparison offers a promising approach to estimate willingness to accept. This method involves having individuals make binary choices between receiving a particular good or a sum of money....
Shen, Chuanlai; Xu, Tao; Wu, You; Li, Xiaoe; Xia, Lingzhi; Wang, Wei; Shahzad, Khawar Ali; Zhang, Lei; Wan, Xin; Qiu, Jie
2017-11-27
Conventional peptide-major histocompatibility complex (pMHC) multimer staining, intracellular cytokine staining, and enzyme-linked immunospot (ELISPOT) assays cannot concurrently determine the frequency and reactivity of antigen-specific T (AST) cells in a single assay. In this report, pMHC multimer, magnetic-activated cell sorting (MACS), and ELISPOT techniques have been integrated into a microwell by coupling pMHC multimers onto cell-sized magnetic beads to characterize AST cell populations in a 96-well microplate pre-coated with cytokine-capture antibodies. This method, termed AAPC-microplate, allows the enumeration and local cytokine production of AST cells in a single assay without using flow cytometry or fluorescence intensity scanning, and thus will be widely applicable. Here, ovalbumin 257-264-specific CD8+ T cells from OT-1 T cell receptor (TCR) transgenic mice were measured. The methodological accuracy, specificity, reproducibility, and sensitivity in enumerating AST cells compared well with conventional pMHC multimer staining. Furthermore, the AAPC-microplate was applied to detect the frequency and reactivity of Hepatitis B virus (HBV) core antigen 18-27- and surface antigen 183-191-specific CD8+ T cells in patients, and was compared with the conventional method. This method, without the need for high-end instruments, may facilitate the routine analysis of patient-specific cellular immune response patterns to a given antigen in translational studies.
NASA Astrophysics Data System (ADS)
Dörr, Dominik; Joppich, Tobias; Schirmaier, Fabian; Mosthaf, Tobias; Kärger, Luise; Henning, Frank
2016-10-01
Thermoforming of continuously fiber reinforced thermoplastics (CFRTP) is ideally suited to thin-walled and complex-shaped products. By means of forming simulation, an initial validation of the producibility of a specific geometry, an optimization of the forming process, and the prediction of fiber reorientation due to forming are possible. Nevertheless, the applied methods need to be validated. Therefore, a method is presented which enables the calculation of error measures for the mismatch between simulation results and experimental tests, based on measurements with a conventional coordinate measuring device. As a quantitative measure describing the curvature is provided, the presented method is also suitable for numerical or experimental sensitivity studies on wrinkling behavior. The applied methods for forming simulation, implemented in Abaqus explicit, are presented and applied to a generic geometry. The same geometry is tested experimentally, and simulation and test results are compared by the proposed validation method.
Noncontact Measurement of Doping Profile for Bare Silicon
NASA Astrophysics Data System (ADS)
Kohno, Motohiro; Matsubara, Hideaki; Okada, Hiroshi; Hirae, Sadao; Sakai, Takamasa
1998-10-01
In this study, we evaluate the doping concentrations of bare silicon wafers by noncontact capacitance-voltage (C-V) measurements. The metal-air-insulator-semiconductor (MAIS) method enables the measurement of C-V characteristics of silicon wafers without oxidation and electrode preparation. This method has the advantage that a doping profile close to the wafer surface can be obtained. In our experiment, epitaxial silicon wafers were used to compare the MAIS method with the conventional MIS method. The experimental results obtained from the two methods showed good agreement. Then, doping profiles of boron-doped Czochralski (CZ) wafers were measured by the MAIS method. The result indicated a significant reduction of the doping concentration near the wafer surface. This observation is attributed to the well-known deactivation of boron with atomic hydrogen which permeated the silicon bulk during the polishing process. This deactivation was recovered by annealing in air at 180°C for 120 min.
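The doping extraction underlying any C-V technique can be sketched from the slope of 1/C² versus voltage; the built-in potential and doping level below are illustrative assumptions, not values from the study:

```python
import numpy as np

q = 1.602e-19             # elementary charge, C
eps_s = 11.7 * 8.854e-12  # silicon permittivity, F/m

def doping_from_cv(voltage, cap_per_area):
    """Doping concentration (m^-3) from the slope of 1/C^2 versus V."""
    slope = np.polyfit(voltage, 1.0 / cap_per_area ** 2, 1)[0]
    return 2.0 / (q * eps_s * abs(slope))

# Synthesize an ideal 1/C^2 line for a uniform 1e21 m^-3 (1e15 cm^-3) wafer
# with an assumed 0.6 V built-in potential, then recover the doping:
voltage = np.linspace(0.0, 5.0, 50)
inv_c2 = 2.0 * (0.6 + voltage) / (q * eps_s * 1e21)
n_extracted = doping_from_cv(voltage, inv_c2 ** -0.5)
```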
Estimating intercellular surface tension by laser-induced cell fusion.
Fujita, Masashi; Onami, Shuichi
2011-12-01
Intercellular surface tension is a key variable in understanding cellular mechanics. However, conventional methods are not well suited for measuring the absolute magnitude of intercellular surface tension because these methods require determination of the effective viscosity of the whole cell, a quantity that is difficult to measure. In this study, we present a novel method for estimating the intercellular surface tension at single-cell resolution. This method exploits the cytoplasmic flow that accompanies laser-induced cell fusion when the pressure difference between cells is large. Because the cytoplasmic viscosity can be measured using well-established technology, this method can be used to estimate the absolute magnitudes of tension. We applied this method to two-cell-stage embryos of the nematode Caenorhabditis elegans and estimated the intercellular surface tension to be in the 30-90 µN m⁻¹ range. Our estimate was in close agreement with cell-medium surface tensions measured at single-cell resolution.
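A hedged sketch of the Young-Laplace relation that connects an intercellular pressure difference to surface tension for two spherical cells of unequal radii; the radii and pressure below are invented, C. elegans-scale values, not the study's measurements:

```python
def surface_tension(pressure_diff_pa, r_small_m, r_large_m):
    """gamma (N/m) from dP = 2 * gamma * (1/R_small - 1/R_large)."""
    return pressure_diff_pa / (2.0 * (1.0 / r_small_m - 1.0 / r_large_m))

# Invented embryo-scale numbers: radii in metres, pressure in pascals.
gamma = surface_tension(pressure_diff_pa=4.0, r_small_m=12e-6, r_large_m=18e-6)
```

With these placeholder values the result lands at 72 µN/m, i.e. within the 30-90 µN m⁻¹ range the study reports.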
Practical uncertainty reduction and quantification in shock physics measurements
Akin, M. C.; Nguyen, J. H.
2015-04-20
We report the development of a simple error analysis sampling method for identifying intersections and inflection points to reduce total uncertainty in experimental data. This technique was used to reduce uncertainties in sound speed measurements by 80% over conventional methods. Here, we focused on its impact on a previously published set of Mo sound speed data and possible implications for phase transition and geophysical studies. However, this technique's application can be extended to a wide range of experimental data.
Saito, Masatoshi
2009-08-01
Dual-energy computed tomography (DECT) has the potential for measuring electron density distribution in a human body to predict the range of particle beams for treatment planning in proton or heavy-ion radiotherapy. However, thus far, a practical dual-energy method that can be used to precisely determine electron density for treatment planning in particle radiotherapy has not been developed. In this article, another DECT technique involving a balanced filter method using a conventional x-ray tube is described. For the spectral optimization of DECT using balanced filters, the author calculates beam-hardening error and air kerma required to achieve a desired noise level in electron density and effective atomic number images of a cylindrical water phantom with 50 cm diameter. The calculation enables the selection of beam parameters such as tube voltage, balanced filter material, and its thickness. The optimized parameters were applied to cases with different phantom diameters ranging from 5 to 50 cm for the calculations. The author predicts that the optimal combination of tube voltages would be 80 and 140 kV with Tb/Hf and Bi/Mo filter pairs for the 50-cm-diameter water phantom. When a single phantom calibration at a diameter of 25 cm was employed to cover all phantom sizes, maximum absolute beam-hardening errors were 0.3% and 0.03% for electron density and effective atomic number, respectively, over a range of diameters of the water phantom. The beam-hardening errors were 1/10 or less as compared to those obtained by conventional DECT, although the dose was twice that of the conventional DECT case. From the viewpoint of beam hardening and the tube-loading efficiency, the present DECT using balanced filters would be significantly more effective in measuring the electron density than the conventional DECT. 
Nevertheless, further development of low-exposure imaging technology, as well as x-ray tubes with higher outputs, will be necessary to apply DECT coupled with the balanced filter method in clinical use.
DOT National Transportation Integrated Search
1971-04-01
An automated fluorometric trihydroxyindole procedure is described for the measurement of norepinephrine (NE) and epinephrine (E) in blood plasma or urine. The method employs conventional techniques for isolation of the catecholamines by alumina colum...
NASA Astrophysics Data System (ADS)
Jiao, Jieqing; Salinas, Cristian A.; Searle, Graham E.; Gunn, Roger N.; Schnabel, Julia A.
2012-02-01
Dynamic Positron Emission Tomography is a powerful tool for quantitative imaging of in vivo biological processes. The long scan durations necessitate motion correction, to maintain the validity of the dynamic measurements, which can be particularly challenging due to the low signal-to-noise ratio (SNR) and spatial resolution, as well as the complex tracer behaviour in the dynamic PET data. In this paper we develop a novel automated expectation-maximisation image registration framework that incorporates temporal tracer kinetic information to correct for inter-frame subject motion during dynamic PET scans. We employ the Zubal human brain phantom to simulate dynamic PET data using SORTEO (a Monte Carlo-based simulator), in order to validate the proposed method for its ability to recover imposed rigid motion. We have conducted a range of simulations using different noise levels, and corrupted the data with a range of rigid motion artefacts. The performance of our motion correction method is compared with pairwise registration using normalised mutual information as a voxel similarity measure (an approach conventionally used to correct for dynamic PET inter-frame motion based solely on intensity information). To quantify registration accuracy, we calculate the target registration error across the images. The results show that our new dynamic image registration method based on tracer kinetics yields better realignment of the simulated datasets, halving the target registration error when compared to the conventional method at small motion levels, as well as yielding smaller residuals in translation and rotation parameters. We also show that our new method is less affected by the low signal in the first few frames, which the conventional method based on normalised mutual information fails to realign.
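The conventional similarity measure referred to above, normalised mutual information, can be sketched as follows with toy frames (invented data; a real PET pipeline would evaluate this inside an optimiser over rigid transforms):

```python
import numpy as np

def nmi(img_a, img_b, bins=32):
    """Normalised mutual information: (H(A) + H(B)) / H(A, B)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(2)
frame = rng.random((64, 64))
shifted = np.roll(frame, 3, axis=0)  # simulated inter-frame motion

# Identical frames are maximally similar; misaligned frames score lower.
assert nmi(frame, frame) > nmi(frame, shifted)
```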
Real-time line-width measurements: a new feature for reticle inspection systems
NASA Astrophysics Data System (ADS)
Eran, Yair; Greenberg, Gad; Joseph, Amnon; Lustig, Cornel; Mizrahi, Eyal
1997-07-01
The significance of line width control in mask production has become greater with the lessening of defect size. Two conventional methods are used for controlling line width dimensions in the manufacturing of masks for submicron devices: critical dimension (CD) measurement and the detection of edge defects. Achieving reliable and accurate control of line width errors is one of the most challenging tasks in mask production. Neither of the two methods cited above guarantees the detection of line width errors with good sensitivity over the whole mask area. This stems from the fact that CD measurement provides only statistical data on the mask features, whereas the edge defect detection method checks defects on each edge by itself and does not supply information on the combined result of error detection on two adjacent edges. For example, a combination of a small edge defect together with a CD non-uniformity, both within the allowed tolerance, may yield a significant line width error which will not be detected using the conventional methods (see figure 1). A new approach for the detection of line width errors which overcomes this difficulty is presented. Based on this approach, a new sensitive line width error detector was developed and added to Orbot's RT-8000 die-to-database reticle inspection system. This innovative detector operates continuously during the mask inspection process and inspects the entire area of the reticle for line width errors. The detection is based on a comparison of measured line widths taken on both the design database and the scanned image of the reticle. In section 2, the motivation for developing this new detector is presented. The section covers an analysis of various defect types which are difficult to detect using conventional edge detection methods or, alternatively, CD measurements.
In section 3, the basic concept of the new approach is introduced together with a description of the new detector and its characteristics. In section 4, the calibration process that took place in order to achieve reliable and repeatable line width measurements is presented. A description of the experiments conducted to evaluate the sensitivity of the new detector is given in section 5, followed by the results of this evaluation. The conclusions are presented in section 6.
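The failure mode of the two conventional checks can be illustrated with toy numbers (the widths and tolerances below are assumptions for illustration, not values from the paper): each edge deviation passes its own check, yet the combined line width error is out of tolerance.

```python
edge_tolerance = 0.06   # um, per-edge defect tolerance (assumed)
width_tolerance = 0.08  # um, CD tolerance (assumed)

design_width = 1.00                   # um
left_shift, right_shift = 0.05, 0.05  # both edges move inward
measured_width = design_width - left_shift - right_shift

edge_checks_pass = max(left_shift, right_shift) <= edge_tolerance
width_error = abs(measured_width - design_width)
width_check_fails = width_error > width_tolerance
```

Here both edge deviations (0.05 um) pass the per-edge check, but the combined width error (0.10 um) exceeds the CD tolerance, which is exactly the case a direct database-to-image width comparison catches.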
Géczi, Gábor; Horváth, Márk; Kaszab, Tímea; Alemany, Gonzalo Garnacho
2013-01-01
Extension of shelf life and preservation of products are both very important for the food industry. However, just as with other processes, speed and higher manufacturing performance are also beneficial. Although microwave heating is utilized in a number of industrial processes, there are many unanswered questions about its effects on foods. Here we analyze whether the effects of microwave heating with continuous flow are equivalent to those of traditional heat transfer methods. In our study, the effects of heating of liquid foods by conventional and continuous flow microwave heating were studied. Among other properties, we compared the stability of the liquid foods between the two heat treatments. Our goal was to determine whether the continuous flow microwave heating and the conventional heating methods have the same effects on the liquid foods, and, therefore, whether microwave heat treatment can effectively replace conventional heat treatments. We compared the colour and separation phenomena of the samples treated by the different methods. For milk, we also monitored the total viable cell count; for orange juice, the vitamin C content, in addition to the taste of the product by sensory analysis. The majority of the results indicate that the circulating coil microwave method used here is equivalent to the conventional heating method based on thermal conduction and convection. However, some results in the analysis of the milk samples show clear differences between heat transfer methods. According to our results, the colour parameters (lightness, red-green and blue-yellow values) of the microwave treated samples differed not only from the untreated control, but also from the traditional heat treated samples. The differences are visually undetectable; however, they become evident through analytical measurement with a spectrophotometer. This finding suggests that besides thermal effects, microwave-based food treatment can alter product properties in other ways as well. PMID:23341982
Bacterial aerosol emission rates from municipal wastewater aeration tanks.
Sawyer, B; Elenbogen, G; Rao, K C; O'Brien, P; Zenz, D R; Lue-Hing, C
1993-01-01
In this report we describe the results of a study conducted to determine the rates of bacterial aerosol emission from the surfaces of the aeration tanks of the Metropolitan Water Reclamation District of Greater Chicago John E. Egan Water Reclamation Plant. This study was accomplished by conducting test runs in which Andersen six-stage viable samplers were used to collect bacterial aerosol samples inside a walled tower positioned above an aeration tank liquid surface at the John E. Egan Water Reclamation Plant. The samples were analyzed for standard plate counts (SPC), total coliforms (TC), fecal coliforms, and fecal streptococci. Two methods of calculation were used to estimate the bacterial emission rate. The first method was a conventional stack emission rate calculation method in which the measured air concentration of bacteria was multiplied by the air flow rate emanating from the aeration tanks. The second method was a more empirical method in which an attempt was made to measure all of the bacteria emanating from an isolated area (0.37 m2) of the aeration tank surface over time. The data from six test runs were used to determine bacterial emission rates by both calculation methods. As determined by the conventional calculation method, the average SPC emission rate was 1.61 SPC/m2/s (range, 0.66 to 2.65 SPC/m2/s). As determined by the empirical calculation method, the average SPC emission rate was 2.18 SPC/m2/s (range, 1.25 to 2.66 SPC/m2/s). For TC, the average emission rate was 0.20 TC/m2/s (range, 0.02 to 0.40 TC/m2/s) when the conventional calculation method was used and 0.27 TC/m2/s (range, 0.04 to 0.53 TC/m2/s) when the empirical calculation method was used.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:8250547
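The conventional stack-style calculation described above reduces to a one-line product of concentration and flow per unit surface area; the numbers below are invented for illustration, not the study's data:

```python
def emission_rate(concentration_cfu_m3, air_flow_m3_s, surface_m2):
    """Bacterial emission rate in CFU per m^2 of tank surface per second."""
    return concentration_cfu_m3 * air_flow_m3_s / surface_m2

rate = emission_rate(concentration_cfu_m3=40.0,  # CFU per m^3 of sampled air
                     air_flow_m3_s=0.02,         # aeration off-gas flow
                     surface_m2=0.37)            # isolated tower footprint
```

The empirical method differs in that it attempts to capture all bacteria leaving the isolated area over time, rather than multiplying a point concentration by a flow estimate.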
Illés, Tamás; Somoskeöy, Szabolcs
2013-06-01
A new concept of vertebra vectors based on spinal three-dimensional (3D) reconstructions of images from the EOS system, a new low-dose X-ray imaging device, was recently proposed to facilitate interpretation of EOS 3D data, especially with regard to horizontal plane images. This retrospective study aimed to evaluate the spinal layout visualized by EOS 3D and vertebra vectors before and after surgical correction, to compare scoliotic spine measurement values based on 3D vertebra vectors with measurements using conventional two-dimensional (2D) methods, and to evaluate horizontal plane vector parameters for their relationship with the magnitude of scoliotic deformity. 95 patients with adolescent idiopathic scoliosis operated according to the Cotrel-Dubousset principle underwent EOS X-ray examinations pre- and postoperatively, followed by 3D reconstructions and generation of vertebra vectors in a calibrated coordinate system to calculate vector coordinates and parameters, as published earlier. Differences between values from conventional 2D Cobb methods and from vertebra vector-based methods were evaluated by a means-comparison t test, and the relationship of corresponding parameters was analysed by bivariate correlation. The relationship of horizontal plane vector parameters with the magnitude of scoliotic deformities and the results of surgical correction was analysed by Pearson correlation and linear regression. In comparison with manual 2D methods, a very close relationship was detectable in vertebra vector-based curvature data for coronal curves (preop r 0.950, postop r 0.935) and thoracic kyphosis (preop r 0.893, postop r 0.896), while the small difference found in L1-L5 lordosis values (preop r 0.763, postop r 0.809) was shown to be strongly related to the magnitude of the corresponding L5 wedge.
The correlation analysis revealed a strong correlation between the magnitude of scoliosis and the lateral translation of the apical vertebra in the horizontal plane, reflected in the horizontal plane coordinates of the terminal and initial points of the apical vertebra vectors (r 0.701; r 0.667). A weaker correlation was detected between the axial rotation of the apical vertebrae and the magnitude of the frontal curves (r 0.459). Vertebra vectors provide a key opportunity to visualize spinal deformities in all three planes simultaneously. Measurement methods based on vertebra vectors proved to be just as accurate and reliable as conventional measurement methods for coronal and sagittal plane parameters. In addition, the horizontal plane display of the curves can be studied using the same vertebra vectors. Based on the vertebra vector data, reducing the lateral translation of the vertebrae appears to contribute more to the result of surgical correction of spinal deformities than correcting axial rotation.
Vojdani, M; Torabi, K; Farjood, E; Khaledi, Aar
2013-09-01
Metal-ceramic crowns are most commonly used as the complete coverage restorations in clinical daily use. Disadvantages of conventional hand-made wax-patterns introduce some alternative ways by means of CAD/CAM technologies. This study compares the marginal and internal fit of copings cast from CAD/CAM and conventional fabricated wax-patterns. Twenty-four standardized brass dies were prepared and randomly divided into 2 groups according to the wax-patterns fabrication method (CAD/CAM technique and conventional method) (n=12). All the wax-patterns were fabricated in a standard fashion by means of contour, thickness and internal relief (M1-M12: representative of CAD/CAM group, C1-C12: representative of conventional group). CAD/CAM milling machine (Cori TEC 340i; imes-icore GmbH, Eiterfeld, Germany) was used to fabricate the CAD/CAM group wax-patterns. The copings cast from 24 wax-patterns were cemented to the corresponding dies. For all the coping-die assemblies cross-sectional technique was used to evaluate the marginal and internal fit at 15 points. The Student's t- test was used for statistical analysis (α=0.05). The overall mean (SD) for absolute marginal discrepancy (AMD) was 254.46 (25.10) um for CAD/CAM group and 88.08(10.67) um for conventional group (control). The overall mean of internal gap total (IGT) was 110.77(5.92) um for CAD/CAM group and 76.90 (10.17) um for conventional group. The Student's t-test revealed significant differences between 2 groups. Marginal and internal gaps were found to be significantly higher at all measured areas in CAD/CAM group than conventional group (p< 0.001). Within limitations of this study, conventional method of wax-pattern fabrication produced copings with significantly better marginal and internal fit than CAD/CAM (machine-milled) technique. 
All factors were standardized between the 2 groups except the wax pattern fabrication technique; therefore, only the conventional group produced copings with clinically acceptable margins of less than 120 μm.
Farjood, Ehsan; Vojdani, Mahroo; Torabi, Kiyanoosh; Khaledi, Amir Ali Reza
2017-01-01
Given the limitations of conventional waxing, computer-aided design and computer-aided manufacturing (CAD-CAM) technologies have been developed as alternative methods of making patterns. The purpose of this in vitro study was to compare the marginal and internal fit of metal copings derived from wax patterns fabricated by rapid prototyping (RP) to those created by the conventional handmade technique. Twenty-four standardized brass dies were milled and divided into 2 groups (n=12) according to the wax pattern fabrication method. The CAD-RP group was assigned to the experimental group, and the conventional group to the control group. The cross-sectional technique was used to assess the marginal and internal discrepancies at 15 points on the master die by using a digital microscope. An independent t test was used for statistical analysis (α=.01). The CAD-RP group had a total mean (±SD) absolute marginal discrepancy of 117.1 (±11.5) μm and a mean marginal discrepancy of 89.8 (±8.3) μm. The conventional group had an absolute marginal discrepancy of 88.1 (±10.7) μm and a mean marginal discrepancy of 69.5 (±15.6) μm. The overall mean (±SD) of the total internal discrepancy, separately calculated as the axial internal discrepancy and occlusal internal discrepancy, was 95.9 (±8.0) μm for the CAD-RP group and 76.9 (±10.2) μm for the conventional group. The independent t test results showed significant differences between the 2 groups. The CAD-RP group had larger discrepancies at all measured areas than the conventional group, which was statistically significant (P<.01). Within the limitations of this in vitro study, the conventional method of wax pattern fabrication produced copings with better marginal and internal fit than the CAD-RP method. However, the marginal and internal fit for both groups were within clinically acceptable ranges. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
QUEST+: A general multidimensional Bayesian adaptive psychometric method.
Watson, Andrew B
2017-03-01
QUEST+ is a Bayesian adaptive psychometric testing method that allows an arbitrary number of stimulus dimensions, psychometric function parameters, and trial outcomes. It is a generalization and extension of the original QUEST procedure and incorporates many subsequent developments in the area of parametric adaptive testing. With a single procedure, it is possible to implement a wide variety of experimental designs, including conventional threshold measurement; measurement of psychometric function parameters, such as slope and lapse; estimation of the contrast sensitivity function; measurement of increment threshold functions; measurement of noise-masking functions; Thurstone scale estimation using pair comparisons; and categorical ratings on linear and circular stimulus dimensions. QUEST+ provides a general method to accelerate data collection in many areas of cognitive and perceptual science.
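The core loop of such a Bayesian adaptive procedure can be sketched as follows. This is an illustrative one-dimensional toy, not the QUEST+ reference implementation: the Weibull psychometric form, the parameter grids, and the slope/guess/lapse values are all assumptions made for the example.

```python
import numpy as np

def weibull_p_correct(x, threshold, slope=3.5, guess=0.5, lapse=0.02):
    """Probability of a correct response at stimulus intensity x (log units)."""
    return guess + (1 - guess - lapse) * (1 - np.exp(-10 ** (slope * (x - threshold))))

thresholds = np.linspace(-2.0, 0.0, 101)           # candidate threshold hypotheses
prior = np.ones_like(thresholds) / thresholds.size  # uniform prior over hypotheses

def expected_entropy(x, prior):
    """Expected posterior entropy if stimulus x is presented next."""
    p_c = weibull_p_correct(x, thresholds)          # per-hypothesis P(correct)
    ent = 0.0
    for outcome_p in (p_c, 1 - p_c):                # two outcomes: correct / incorrect
        p_outcome = np.sum(prior * outcome_p)       # marginal P(outcome)
        post = prior * outcome_p / p_outcome        # posterior given that outcome
        ent += p_outcome * -np.sum(post * np.log(post + 1e-12))
    return ent

# Choose the stimulus that minimizes expected posterior entropy ...
stimuli = np.linspace(-2.0, 0.0, 41)
best_x = min(stimuli, key=lambda x: expected_entropy(x, prior))

# ... then, after observing a response, update the posterior with Bayes' rule
# (here we suppose the observed response was "correct").
likelihood = weibull_p_correct(best_x, thresholds)
posterior = prior * likelihood
posterior /= posterior.sum()
```

QUEST+ generalizes this loop to arbitrary numbers of stimulus dimensions, parameters, and outcomes; the entropy-minimizing stimulus choice and Bayesian update shown here are the shared skeleton.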
Lower conjunctival fornix packing for mydriasis in premature infants: a randomized trial
Thanathanee, Onsiri; Ratanapakorn, Tanapat; Morley, Michael G; Yospaiboon, Yosanan
2012-01-01
Objective To compare the mydriatic effect of lower conjunctival fornix packing with conventional instillation of eyedrops containing 2.5% phenylephrine and 1% tropicamide in premature infants undergoing examination for retinopathy of prematurity. Methods The patients were randomized to receive either conventional instillation of mydriatic drops or lower conjunctival fornix packing in one eye and the alternate method in the fellow eye. For the eyes receiving lower conjunctival fornix packing (study group), one small piece of cotton wool soaked with one drop of 2.5% phenylephrine and one drop of 1% tropicamide was packed in the lower conjunctival fornix for 15 minutes. For the eyes receiving the conventional instillation (control group), 2.5% phenylephrine and 1% tropicamide were alternately instilled every 5 minutes for two doses each. Horizontal pupil diameter was measured with a ruler in millimeters 40 minutes later. Results The mean dilated pupil diameters in the study group and control group were 5.76 ± 1.01 mm and 4.50 ± 1.08 mm, respectively. This difference was statistically significant (P < 0.05). Conclusion The dilated pupil diameter after lower conjunctival fornix packing was significantly larger than after conventional instillation. We recommend the packing method for dilating the preterm infant pupil, especially when the pupil is difficult to dilate. PMID:22368443
A probability-based multi-cycle sorting method for 4D-MRI: A simulation study.
Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing
2016-12-01
To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images, and to evaluate the performance of this new method against conventional phase-based methods in terms of image quality and tumor motion measurement. Based on previous findings that the breathing motion probability density function (PDF) of a single breathing cycle differs dramatically from the true stabilized PDF that results from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missed by conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that best represent the patient's main breathing patterns and then reconstruct a set of 4D images for each identified main breathing cycle. The method is implemented in three steps: (1) the breathing signal is decomposed into individual breathing cycles, characterized by amplitude and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles; if a group contains more than 10% of all breathing cycles in a breathing signal, it is designated a main breathing pattern group and is represented by the average of the individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients' breathing signals to evaluate its feasibility for improving the target motion PDF. The new method was subsequently tested with a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom.
Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured on the 4D images, and also in terms of the accuracy of the average intensity projection (AIP) of the 4D images. Probability-based sorting showed improved similarity between the breathing motion PDF from the 4D images and the reference PDF compared with single-cycle sorting, indicated by a significant increase in the Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03, vs. single-cycle sorting, DSC = 0.83 ± 0.05; p < 0.001). In the simulation study on XCAT, the probability-based method outperformed the conventional phase-based methods in the qualitative evaluation of motion artifacts and in the quantitative evaluation of tumor volume precision and accuracy and of the accuracy of the AIP of the 4D images. In this paper, the authors demonstrate the feasibility of a novel probability-based multi-cycle 4D image sorting method. The preliminary results show that the new method can improve the accuracy of the tumor motion PDF and the AIP of the 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management.
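Step (2) of the sorting procedure, grouping cycles by amplitude and period and keeping groups above the 10% threshold, can be sketched roughly as follows; the bin widths and the example values are assumptions for illustration, not the authors' parameters.

```python
def main_breathing_cycles(amplitudes, periods, amp_bin=2.0, per_bin=0.5):
    """Group breathing cycles by (amplitude, period) bins and return the
    main breathing cycles: (representative cycle, weight) pairs for every
    group holding more than 10% of all cycles."""
    groups = {}
    for amp, per in zip(amplitudes, periods):
        key = (int(amp // amp_bin), int(per // per_bin))  # quantize into a bin
        groups.setdefault(key, []).append((amp, per))
    n = len(amplitudes)
    main = []
    for members in groups.values():
        weight = len(members) / n
        if weight > 0.10:  # main breathing pattern group
            # Represent the group by the average of its member cycles.
            mean_amp = sum(a for a, _ in members) / len(members)
            mean_per = sum(p for _, p in members) / len(members)
            main.append(((mean_amp, mean_per), weight))
    return main

# e.g. 8 shallow cycles (~10 mm amplitude) and 2 deep ones (~14 mm), all 4 s long
main = main_breathing_cycles([10] * 8 + [14] * 2, [4] * 10)
```

Each returned representative cycle would then seed the reconstruction of its own 4D image set in step (3).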
Complementary and Alternative Approaches to Pain Relief During Labor
Theau-Yonneau, Anne
2007-01-01
This review evaluated the effect of complementary and alternative medicine on labor pain using conventional scientific methods, drawing on electronic databases through 2006. Only randomized controlled trials with outcome measures for labor pain were retained for the conclusions. Many studies did not meet the scientific inclusion criteria. Based on the randomized controlled trials, we conclude the following regarding the reduction of labor pain and/or of the need for conventional analgesic methods: (i) efficacy was found for acupressure and sterile water blocks; (ii) most results favored some efficacy for acupuncture and hydrotherapy; (iii) studies of other complementary or alternative therapies for labor pain control have not shown effectiveness. PMID:18227907
Schneider, Falk; Waithe, Dominic; Galiani, Silvia; Bernardino de la Serna, Jorge; Sezgin, Erdinc; Eggeling, Christian
2018-06-19
The diffusion dynamics in the cellular plasma membrane provide crucial insights into molecular interactions, organization, and bioactivity. Beam-scanning fluorescence correlation spectroscopy combined with super-resolution stimulated emission depletion nanoscopy (scanning STED-FCS) measures such dynamics with high spatial and temporal resolution. It reveals nanoscale diffusion characteristics by measuring the molecular diffusion in conventional confocal mode and super-resolved STED mode sequentially for each pixel along the scanned line. However, to directly link the spatial and the temporal information, a method that simultaneously measures the diffusion in confocal and STED modes is needed. Here, to overcome this problem, we establish an advanced STED-FCS measurement method, line interleaved excitation scanning STED-FCS (LIESS-FCS), that discloses the molecular diffusion modes at different spatial positions with a single measurement. It relies on fast beam-scanning along a line with alternating laser illumination that yields, for each pixel, the apparent diffusion coefficients for two different observation spot sizes (conventional confocal and super-resolved STED). We demonstrate the potential of the LIESS-FCS approach with simulations and experiments on lipid diffusion in model and live cell plasma membranes. We also apply LIESS-FCS to investigate the spatiotemporal organization of glycosylphosphatidylinositol-anchored proteins in the plasma membrane of live cells, which, interestingly, show multiple diffusion modes at different spatial positions.
Basaki, Kinga; Alkumru, Hasan; De Souza, Grace; Finer, Yoav
To assess the three-dimensional (3D) accuracy and clinical acceptability of implant definitive casts fabricated using a digital impression approach and to compare the results with those of a conventional impression method in a partially edentulous condition. A mandibular reference model was fabricated with implants in the first premolar and molar positions to simulate a patient with bilateral posterior edentulism. Ten implant-level impressions per method were made using either an intraoral scanner with scanning abutments for the digital approach or an open-tray technique and polyvinylsiloxane material for the conventional approach. 3D analysis and comparison of implant locations on the resultant definitive casts were performed using a laser scanner and quality control software. The inter-implant distances and inter-implant angulations for each implant pair were measured on the reference model and on each definitive cast (n = 20 per group); these measurements were compared to calculate the magnitude of the 3D error for each definitive cast. The influence of implant angulation on definitive cast accuracy was evaluated for both the digital and conventional approaches. Statistical analysis was performed using the t test (α = .05) for implant position and angulation. Clinical qualitative assessment of accuracy was done by assessing the passive fit of a master verification stent for each implant pair, with significance analyzed using the chi-square test (α = .05). A 3D error of implant positioning was observed for the two impression techniques vs the reference model, with mean ± standard deviation (SD) errors of 116 ± 94 μm and 56 ± 29 μm for the digital and conventional approaches, respectively (P = .01). In contrast, the inter-implant angulation errors were not significantly different between the two techniques (P = .83). Implant angulation did not have a significant influence on definitive cast accuracy within either technique (P = .64).
The verification stent demonstrated acceptable passive fit for 11 out of 20 casts and 18 out of 20 casts for the digital and conventional methods, respectively (P = .01). Definitive casts fabricated using the digital impression approach were less accurate than those fabricated from the conventional impression approach for this simulated clinical scenario. A significant number of definitive casts generated by the digital technique did not meet clinically acceptable accuracy for the fabrication of a multiple implant-supported restoration.
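The kind of 3D positioning error reported above can be expressed as the Euclidean distance between corresponding implant positions on the cast and the reference model. A minimal sketch with invented coordinates (not study data):

```python
import math

def error_3d(p_cast, p_ref):
    """Euclidean distance between two 3D points, e.g. implant platform
    centers on the scanned cast vs. the reference model (in mm)."""
    return math.dist(p_cast, p_ref)  # sqrt(dx^2 + dy^2 + dz^2)

ref_center  = (12.00, 5.00, 3.00)   # hypothetical reference implant center, mm
cast_center = (12.05, 4.97, 3.08)   # same implant as measured on the cast, mm

err_um = error_3d(cast_center, ref_center) * 1000  # convert mm to micrometers
```

Averaging such per-implant (or per-implant-pair) errors over all casts in a group yields summary figures comparable to the mean ± SD values quoted in the abstract.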
NASA Astrophysics Data System (ADS)
Kageshima, Masami; Takeda, Seiji; Ptak, Arkadiusz; Nakamura, Chikashi; Jarvis, Suzanne P.; Tokumoto, Hiroshi; Miyake, Jun
2004-12-01
A method for measuring intramolecular energy dissipation as well as stiffness variation in a single biomolecule in situ by atomic force microscopy (AFM) is presented. An AFM cantilever is magnetically modulated at an off-resonance frequency while it elongates a single peptide molecule in buffer solution. The molecular stiffness and the energy dissipation are measured via the amplitude and phase lag in the response signal. Data showing a peculiar feature in both profiles of stiffness and dissipation is presented. This suggests that the present method is more sensitive to the state of the molecule than the conventional force-elongation measurement is.
Gene-expression programming for flip-bucket spillway scour.
Guven, Aytac; Azamathulla, H Md
2012-01-01
During the last two decades, researchers have found that soft computing techniques, used as an alternative to conventional statistical methods based on controlled laboratory or field data, give significantly better results. Gene-expression programming (GEP), an extension of genetic programming (GP), has attracted the attention of researchers for the prediction of hydraulic data. This study presents GEP as an alternative tool for predicting scour downstream of a flip-bucket spillway. Actual field measurements were used to develop the GEP models. The proposed GEP models are compared with earlier conventional GP results (Azamathulla et al. 2008b; RMSE = 2.347, δ = 0.377, R = 0.842) and with commonly used regression-based formulae. The GEP predictions were in close agreement with the measured values and considerably better than those of conventional GP and the regression-based formulae. The results are reported in terms of statistical error measures (GEP1: RMSE = 1.596, δ = 0.109, R = 0.917) and illustrated via scatter plots.
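The error statistics quoted above, RMSE and the correlation coefficient R between measured and predicted scour, follow the standard definitions. A minimal sketch with illustrative numbers (not the study's field data):

```python
import math

def rmse(measured, predicted):
    """Root-mean-square error between measured and predicted values."""
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted))
                     / len(measured))

def pearson_r(measured, predicted):
    """Pearson correlation coefficient R."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(predicted) / n
    cov = sum((m - mx) * (p - my) for m, p in zip(measured, predicted))
    sx = math.sqrt(sum((m - mx) ** 2 for m in measured))
    sy = math.sqrt(sum((p - my) ** 2 for p in predicted))
    return cov / (sx * sy)

# Invented scour depths for illustration only
measured  = [3.2, 4.1, 5.0, 6.3, 7.8]
predicted = [3.0, 4.4, 4.9, 6.6, 7.5]
```

A lower RMSE and an R closer to 1 indicate better agreement with the field measurements, which is the sense in which the GEP1 model (RMSE = 1.596, R = 0.917) outperforms the earlier GP results.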
New method of 2-dimensional metrology using mask contouring
NASA Astrophysics Data System (ADS)
Matsuoka, Ryoichi; Yamagata, Yoshikazu; Sugiyama, Akiyuki; Toyoda, Yasutaka
2008-10-01
We have developed a new method for accurately profiling and measuring mask shapes using a Mask CD-SEM. The method is intended to achieve the high accuracy, stability, and reproducibility of the Mask CD-SEM by adopting an edge detection algorithm, the key technology used in CD-SEM for high-accuracy CD measurement. Compared with conventional image processing methods for contour profiling, this edge detection method can create profiles with much higher accuracy, comparable to CD-SEM measurement of semiconductor device CDs. By exploiting these high-precision contour profiles, the method realizes two-dimensional metrology for fine patterns that were previously difficult to measure. In this report, we introduce the algorithm in general, present experimental results, and discuss practical applications. As design rules for semiconductor devices continue to shrink, aggressive OPC (Optical Proximity Correction) is indispensable in RET (Resolution Enhancement Technology). From the viewpoint of DFM (Design for Manufacturability), the dramatic increase in data processing cost for advanced MDP (Mask Data Preparation), for instance, and the surge in mask-making cost have become major concerns for device manufacturers. That is, quality demands are becoming strenuous because of the enormous growth in data volume that accompanies ever finer patterns in photomask manufacturing. As a result, massive numbers of simulated errors occur during mask inspection, which lengthens mask production and inspection periods, increases costs, and delays delivery. In a sense, there is a trade-off between high-accuracy RET and mask production cost, and it has a significant impact on the semiconductor market centered on the mask business. To cope with this problem, we propose two-dimensional metrology for fine patterns as an effective DFM solution.
Jeon, Young-Chan; Jeong, Chang-Mo
2017-01-01
PURPOSE The purpose of this study was to compare the fit of cast gold crowns fabricated using conventional and digital impression techniques. MATERIALS AND METHODS An artificial tooth in a master model and abutment teeth in ten patients were restored with cast gold crowns fabricated using the digital and conventional impression techniques. The forty silicone replicas were cut into three sections; each section was evaluated at nine points. Measurements were carried out using a measuring microscope and I-Solution software. Data from the silicone replicas were analyzed, and all tests were performed at an α-level of 0.05. RESULTS 1. The average gaps of cast gold crowns fabricated using the digital impression technique were significantly larger than those of the conventional impression technique. 2. For marginal and internal axial gaps, no statistical differences were found between the two impression techniques. 3. The internal occlusal gaps of cast gold crowns fabricated using the digital impression technique were significantly larger than those of the conventional technique. CONCLUSION Both types of prostheses presented clinically acceptable fit. The prostheses fabricated using the digital impression technique showed larger gaps at the occlusal surface. PMID:28243386
Chung, Tae Nyoung; Kim, Sun Wook; You, Je Sung; Chung, Hyun Soo
2016-01-01
Objective Tube thoracostomy (TT) is a commonly performed intensive care procedure. Simulator training may be a good alternative to conventional TT training methods such as apprenticeship and animal skills laboratories. However, there is insufficient evidence supporting the use of a simulator. The aim of this study was to determine whether training with a medical simulator is associated with a faster TT procedure compared with conventional training without a simulator. Methods This is a simulation study. Eligible participants were emergency medicine residents with little TT experience (≤3 procedures). Participants were randomized into two groups: a conventional training group and a simulator training group. While the simulator training group used the simulator to practice TT, the conventional training group watched the instructor perform TT on a cadaver. After training, all participants performed a TT on a cadaver. Performance quality was measured as correct placement and time delay. Subjects were graded if they had difficulty with the procedure. Results The estimated median procedure time was 228 seconds in the conventional training group and 75 seconds in the simulator training group, a statistically significant difference (P=0.040). The difficulty grading did not show any significant difference between groups (overall performance scale, 2 vs. 3; P=0.094). Conclusion TT training with a medical simulator, compared with no simulator training, is associated with a significantly faster procedure when performed on a human cadaver. PMID:27752610
NASA Astrophysics Data System (ADS)
Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.
2018-02-01
The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures 'Overhead Line Monitoring', 'Power-to-Heat' and 'Demand Response in the Industry' are evaluated and compared against conventional grid expansion for the year 2030. The methodical approach of the simulation model is presented first, with detailed descriptions of the grid model and the grid data used, which partly originate from open-source platforms. The paper then explains how 'Curtailment' and 'Redispatch' can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined in consideration of construction costs. The developed simulations show that conventional grid expansion is more efficient and provides greater grid-relieving effects than the evaluated grid optimisation measures.
NASA Technical Reports Server (NTRS)
Talpe, Matthieu J.; Nerem, R. Steven; Forootan, Ehsan; Schmidt, Michael; Lemoine, Frank G.; Enderlin, Ellyn M.; Landerer, Felix W.
2017-01-01
We construct long-term time series of Greenland and Antarctic ice sheet mass change from satellite gravity measurements. A statistical reconstruction approach is developed based on a principal component analysis (PCA) to combine high-resolution spatial modes from the Gravity Recovery and Climate Experiment (GRACE) mission with the gravity information from conventional satellite tracking data. Uncertainties of this reconstruction are rigorously assessed; they include temporal limitations for short GRACE measurements, spatial limitations for the low-resolution conventional tracking data measurements, and limitations of the estimated statistical relationships between low- and high-degree potential coefficients reflected in the PCA modes. Trends of mass variations in Greenland and Antarctica are assessed against a number of previous studies. The resulting time series for Greenland show a higher rate of mass loss than other methods before 2000, while the Antarctic ice sheet appears heavily influenced by interannual variations.
Comparison of Methodologies of Activation Barrier Measurements for Reactions with Deactivation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Zhenhua; Yan, Binhang; Zhang, Li
In this work, methodologies of activation barrier measurements for reactions with deactivation were theoretically analyzed. Reforming of ethane with CO 2 was introduced as an example for reactions with deactivation to experimentally evaluate these methodologies. Both the theoretical and experimental results showed that due to catalyst deactivation, the conventional method would inevitably lead to a much lower activation barrier, compared to the intrinsic value, even though heat and mass transport limitations were excluded. In this work, an optimal method was identified in order to provide a reliable and efficient activation barrier measurement for reactions with deactivation.
Li, Xiangrui; Lu, Zhong-Lin
2012-02-29
Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high-resolution (14- or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer.
The RTbox can also receive external triggers and be used to measure RT with respect to external events. Both VideoSwitcher and RTbox are available for users to purchase. The relevant information and many demonstration programs can be found at http://lobes.usc.edu/.
Chauvenet, B; Bobin, C; Bouchard, J
2017-12-01
Dead-time correction formulae are established in the general case of superimposed non-homogeneous Poisson processes. Based on the same principles as conventional live-timed counting, this method exploits the additional information made available using digital signal processing systems, and especially the possibility to store the time stamps of live-time intervals. No approximation needs to be made to obtain those formulae. Estimates of the variances of corrected rates are also presented. This method is applied to the activity measurement of short-lived radionuclides. Copyright © 2017 Elsevier Ltd. All rights reserved.
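A simplified sketch of the live-timed counting idea that the method generalizes: given the stored time stamps of live-time intervals, the dead-time-corrected rate is the number of counts divided by the accumulated live time. This omits the non-homogeneous Poisson treatment and the variance estimates developed in the paper.

```python
def corrected_rate(n_counts, live_intervals):
    """Dead-time-corrected counting rate.

    n_counts: number of recorded events.
    live_intervals: list of (start, stop) time stamps, in seconds, during
    which the acquisition system was live (not processing a previous event).
    """
    live_time = sum(stop - start for start, stop in live_intervals)
    return n_counts / live_time

# e.g. 1000 counts with 8 s of live time inside a 10 s real-time window
rate = corrected_rate(1000, [(0.0, 5.0), (7.0, 10.0)])  # 125.0 counts/s
```

Storing the live-interval time stamps, as digital signal processing systems allow, is what makes this correction exact rather than an approximation based on an assumed dead-time model.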
Metal ion transport quantified by ICP-MS in intact cells
Figueroa, Julio A. Landero; Stiner, Cory A.; Radzyukevich, Tatiana L.; Heiny, Judith A.
2016-01-01
The use of ICP-MS to measure metal ion content in biological tissues offers a highly sensitive means to study metal-dependent physiological processes. Here we describe the application of ICP-MS to measure membrane transport of Rb and K ions by the Na,K-ATPase in mouse skeletal muscles and human red blood cells. The ICP-MS method provides greater precision and statistical power than possible with conventional tracer flux methods. The method is widely applicable to studies of other metal ion transporters and metal-dependent processes in a range of cell types and conditions. PMID:26838181
EXTRACTION OF ORGANIC CONTAMINANTS FROM MARINE SEDIMENTS AND TISSUES USING MICROWAVE ENERGY
In this study, we compared microwave solvent extraction (MSE) to conventional methods for extracting organic contaminants from marine sediments and tissues with high and varying moisture content. The organic contaminants measured were polychlorinated biphenyl (PCB) congeners, chl...
ERIC Educational Resources Information Center
Bockris, J. O'M.
1983-01-01
Suggests various methods for teaching the double layer in electrochemistry courses. Topics addressed include measuring change in absolute potential difference (PD) at interphase, conventional electrode potential scale, analyzing absolute PD, metal-metal and overlap electron PDs, accumulation of material at interphase, thermodynamics of electrified…
A new way of measuring wiggling pattern in SADP for 3D NAND technology
NASA Astrophysics Data System (ADS)
Mi, Jian; Chen, Ziqi; Tu, Li Ming; Mao, Xiaoming; Liu, Gong Cai; Kawada, Hiroki
2018-03-01
A new metrology method for quantitatively measuring wiggling patterns in a Self-Aligned Double Patterning (SADP) process for 3D NAND technology has been developed with a CD-SEM metrology program operating on images from a Review-SEM system. The metrology program provides accurate modeling of various wiggling patterns. The Review-SEM system provides a Field of View (FOV) a few micrometers wide, which exceeds the precision-guaranteed FOV of a conventional CD-SEM. The results have been verified by visual inspection of vertically compressed images compared against the Wiggling Index from the new method. A best-known-method (BKM) system with connected hardware and software has been developed to measure wiggling patterns automatically.
Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan
2018-05-01
The radiation dose for patients can be reduced with many methods, one of which is abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and compression in the prone position. The aim was to compare the radiation dose and image quality of patient-controlled compression with conventional and prone compression in general radiography, using an experimental design with a quantitative approach. After approval by the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. Radiation doses were measured as dose-area product and analyzed with a paired t-test. Image quality was evaluated by visual grading analysis: four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression, whereas the prone position resulted in both a higher dose and inferior image quality. Patient-controlled compression gave dose levels similar to conventional compression and lower than prone compression. Image quality was similar with patient-controlled and conventional compression and was judged to be better than in the prone position.
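The dose comparison above relies on a paired t-test, appropriate because each patient contributes measurements under more than one compression condition. A from-scratch sketch of the statistic on invented dose-area-product values (not the study's data):

```python
import math

def paired_t(x, y):
    """Paired t statistic (with n-1 degrees of freedom) for two
    measurement series taken on the same subjects."""
    d = [a - b for a, b in zip(x, y)]            # per-subject differences
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Invented dose-area-product readings for five patients, arbitrary units
conventional = [1.20, 1.35, 1.10, 1.50, 1.25]
patient_ctrl = [1.18, 1.38, 1.12, 1.47, 1.26]
t_stat = paired_t(conventional, patient_ctrl)
```

The t statistic is then compared against the t distribution with n-1 degrees of freedom to obtain the p-value; a value near zero, as here, is consistent with the study's finding of no significant dose difference between the two compression methods.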
Electronic cigarette substitution in the experimental tobacco marketplace: A review.
Bickel, Warren K; Pope, Derek A; Kaplan, Brent A; Brady DeHart, W; Koffarnus, Mikhail N; Stein, Jeffrey S
2018-04-24
The evolution of science derives, in part, from the development and use of new methods and techniques. Here, we discuss one development that may have impact on the understanding of tobacco regulatory science: namely, the application of behavioral economics to the complex tobacco marketplace. The purpose of this paper is to review studies that examine conditions impacting the degree to which electronic nicotine delivery system (ENDS) products substitute for conventional cigarettes in the Experimental Tobacco Marketplace (ETM). Collectively, the following factors constitute the current experimental understanding of conditions that will affect ENDS use and substitution for conventional cigarettes: increasing the base price of conventional cigarettes, increasing taxation of conventional cigarettes, subsidizing the price of ENDS products, increasing ENDS nicotine strength, and providing narratives that illustrate the potential health benefits of ENDS consumption in lieu of conventional cigarettes. Each of these factors is likely moderated by consumer characteristics, which include prior ENDS use, ENDS use risk perception, and gender. Overall, the ETM provides a unique method, one that mimics the real world, for exploring and identifying the conditions under which various nicotine products may interact with one another. In addition, the ETM permits the efficacy of a broad range of potential nicotine policies and regulations to be measured prior to governmental implementation. Copyright © 2017. Published by Elsevier Inc.
Gender-Specific Correlates of Complementary and Alternative Medicine Use for Knee Osteoarthritis
Yang, Shibing; Eaton, Charles B.; McAlindon, Timothy; Lapane, Kate L.
2012-01-01
Abstract Background Knee osteoarthritis (OA) increases healthcare use and cost. Women have higher pain and lower quality of life measures compared to men even after accounting for differences in age, body mass index (BMI), and radiographic OA severity. Our objective was to describe gender-specific correlates of complementary and alternative medicine (CAM) use among persons with radiographically confirmed knee OA. Methods Using data from the Osteoarthritis Initiative, 2,679 women and men with radiographic tibiofemoral OA in at least one knee were identified. Treatment approaches were classified as current CAM therapy (alternative medical systems, mind-body interventions, manipulation and body-based methods, energy therapies, and three types of biologically based therapies) or conventional medication use (over-the-counter or prescription). Gender-specific multivariable logistic regression models identified sociodemographic and clinical/functional correlates of CAM use. Results CAM use, either alone (23.9% women, 21.9% men) or with conventional medications (27.3% women, 19.0% men), was common. Glucosamine use (27.2% women, 28.2% men) and chondroitin sulfate use (24.8% women; 25.7% men) did not differ by gender. Compared to men, women were more likely to report use of mind-body interventions (14.1% vs. 5.7%), topical agents (16.1% vs. 9.5%), and concurrent CAM strategies (18.0% vs. 9.9%). Higher quality of life measures and physical function indices in women were inversely associated with any therapy, and higher pain scores were positively associated with conventional medication use. History of hip replacement was a strong correlate of conventional medication use in women but not in men. Conclusions Women were more likely than men to use CAM alone or concomitantly with conventional medications. PMID:22946630
Advanced radiochromic film methodologies for quantitative dosimetry of small and nonstandard fields
NASA Astrophysics Data System (ADS)
Rosen, Benjamin S.
Radiotherapy treatments with small and nonstandard fields are increasing in use as collimation and targeting become more advanced, sparing normal tissues while increasing tumor dose. However, dosimetry of small and nonstandard fields is more difficult than that of conventional fields due to loss of lateral charged-particle equilibrium, tight measurement setup requirements, source occlusion, and the volume-averaging effect of conventional dosimeters. This work aims to create new small and nonstandard field dosimetry protocols using radiochromic film (RCF) in conjunction with novel readout and analysis methodologies. It is also the intent of this work to develop an improved understanding of RCF structure and mechanics for its quantitative use in general applications. Conventional digitization techniques employ white-light flatbed document scanners or scanning-laser densitometers, which are not optimized for RCF dosimetry. A point-by-point precision laser densitometry system (LDS) was developed for this work to overcome the film-scanning artifacts associated with the use of conventional digitizers, such as positional scan dependence, off-axis light scatter, glass bed interference, and low signal-to-noise ratios. The LDS was shown to be optically traceable to national standards and to provide highly reproducible density measurements. Use of the LDS resulted in increased agreement between RCF dose measurements and the single-hit detector model of film response, facilitating traceable RCF calibrations based on calibrated physical quantities. Gafchromic® EBT3 energy response to a variety of reference x-ray and gamma-ray beam qualities was also investigated. Conventional Monte Carlo methods are not capable of predicting film intrinsic energy response to arbitrary particle spectra.
Therefore, a microdosimetric model was developed to simulate the underlying physics of the radiochromic mechanism and was shown to correctly predict the intrinsic response relative to a reference beam quality. These scanning and analysis methodologies form a reliable system for accurate, high-resolution dosimetry. Output factors of 6 MV linear accelerator small fields were measured using the LDS-EBT3 system and were in agreement with Monte Carlo-simulated results. Additionally, measured and simulated relative dose profiles were in agreement, even in build-up regions, in out-of-field locations, and at depth. Together, this work presents reliable methods for dose verification in a variety of challenging dosimetric situations.
Quartz Crystal Microbalance Electronic Interfacing Systems: A Review.
Alassi, Abdulrahman; Benammar, Mohieddine; Brett, Dan
2017-12-05
Quartz Crystal Microbalance (QCM) sensors are actively being implemented in various fields due to their compatibility with different operating conditions in gaseous/liquid mediums for a wide range of measurements. This trend has been matched by the parallel advancement in tailored electronic interfacing systems for QCM sensors. That is, selecting the appropriate electronic circuit is vital for accurate sensor measurements. Many techniques were developed over time to cover the expanding measurement requirements (e.g., accommodating highly-damping environments). This paper presents a comprehensive review of the various existing QCM electronic interfacing systems. Namely, impedance-based analysis, oscillators (conventional and lock-in based techniques), exponential decay methods and the emerging phase-mass based characterization. The aforementioned methods are discussed in detail and qualitatively compared in terms of their performance for various applications. In addition, some theoretical improvements and recommendations are introduced for adequate systems implementation. Finally, specific design considerations of high-temperature microbalance systems (e.g., GaPO₄ crystals (GCM) and Langasite crystals (LCM)) are introduced, while assessing their overall system performance, stability and quality compared to conventional low-temperature applications.
van Gemert-Schriks, M C M
2007-05-01
Although Atraumatic Restorative Treatment (ART) claims to be a patient-friendly method of treatment, little scientific proof of this is available. The aim of this study, therefore, was to obtain a reliable measurement of the degree of discomfort that children experience during dental treatment performed according to the ART approach and during the conventional method. A total of 403 Indonesian schoolchildren were randomly divided into two groups. In each child, one class II restoration was carried out on a deciduous molar, either by means of ART or with rotary instruments (750 rpm). Discomfort scores were determined both by physiological measurements (heart rate) and behavioral observations (Venham scale). Venham scores showed a marked difference between the two groups, whereas heart rate scores differed significantly only during deep excavation. A correlation was found between Venham scores and heart rate measurements. Sex, initial anxiety and performing dentist were shown to be confounding variables. In conclusion, children treated according to the ART approach experience less discomfort than those treated with rotary instruments.
NASA Astrophysics Data System (ADS)
Matsumoto, Nobuhiro; Watanabe, Takuro; Maruyama, Masaaki; Horimoto, Yoshiyuki; Maeda, Tsuneaki; Kato, Kenji
2004-06-01
The gravimetric method is the most widely used method for preparing reference gas mixtures with high accuracy. We have designed and manufactured novel mass measurement equipment for gravimetric preparation of reference gas mixtures. This equipment consists of an electronic mass comparator with a maximum capacity of 15 kg and readability of 1 mg, and an automatic cylinder exchanger. The structure of this equipment is simpler, and its cost much lower, than the conventional mechanical knife-edge type large balance used for gravimetric preparation of primary gas mixtures in Japan. The cylinder exchanger can mount two cylinders alternately on the weighing pan of the comparator. In this study, the performance of the equipment was evaluated. First, the linearity and repeatability of the mass measurement were evaluated using standard mass pieces. Then, binary gas mixtures of propane and nitrogen were prepared and compared with those prepared with the conventional knife-edge type balance. The comparison showed good consistency within the compatibility criterion described in ISO 6143:2001.
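The composition calculation behind the gravimetric method is straightforward: mole fractions follow from the weighed fill masses and the molar masses. A minimal sketch with illustrative values (the function and figures are ours, not the authors' software; ISO 6143 adds a full uncertainty treatment on top of this arithmetic):

```python
# Gravimetric composition of a propane-in-nitrogen mixture from
# weighed fill masses (illustrative sketch only).
M_C3H8 = 44.0956  # g/mol, molar mass of propane
M_N2 = 28.0134    # g/mol, molar mass of nitrogen

def mole_fraction_propane(mass_c3h8_g, mass_n2_g):
    """Propane mole fraction from the two weighed masses."""
    n_c3h8 = mass_c3h8_g / M_C3H8
    n_n2 = mass_n2_g / M_N2
    return n_c3h8 / (n_c3h8 + n_n2)

# Example: 5 g of propane weighed into ~1 kg of nitrogen
print(f"{mole_fraction_propane(5.0, 1000.0):.6f}")
```

With 1 mg readability, a 5 g minor-component fill is resolved to about 2 parts in 10^4, which illustrates why the weighing performance dominates the achievable mixture accuracy.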
Measurement of pattern roughness and local size variation using CD-SEM: current status
NASA Astrophysics Data System (ADS)
Fukuda, Hiroshi; Kawasaki, Takahiro; Kawada, Hiroki; Sakai, Kei; Kato, Takashi; Yamaguchi, Satoru; Ikota, Masami; Momonoi, Yoshinori
2018-03-01
Measurement of line edge roughness (LER) is discussed from four aspects: edge detection, PSD prediction, sampling strategy, and noise mitigation, and general guidelines and practical solutions for LER measurement today are introduced. Advanced edge detection algorithms such as the wave-matching method are shown to be effective for robustly detecting edges in low-SNR images, while a conventional algorithm with weak filtering is still effective in suppressing SEM noise and aliasing. Advanced PSD prediction methods such as the multi-taper method are effective in suppressing sampling noise within a single line edge, while a sufficient number of lines is still required to suppress line-to-line variation. Two types of SEM noise mitigation methods, "apparent noise floor" subtraction and LER-noise decomposition using regression analysis, are verified to successfully remove SEM noise from PSD curves. These results are extended to local CD uniformity (LCDU) measurement to clarify the impact of SEM noise and sampling noise on LCDU.
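The "apparent noise floor" idea can be illustrated in its simplest variance-domain form: white SEM noise adds a constant to the measured roughness variance, which can be subtracted once its level is known. A hedged sketch (not the paper's PSD-domain algorithm; the noise level is assumed known):

```python
import numpy as np

def unbiased_ler_3sigma(edges_nm, noise_sigma_nm):
    """3-sigma LER after subtracting an assumed white SEM-noise
    variance from the measured edge-position variance."""
    residuals = edges_nm - edges_nm.mean(axis=1, keepdims=True)
    sigma2_measured = np.var(residuals)
    sigma2_true = max(sigma2_measured - noise_sigma_nm ** 2, 0.0)
    return 3.0 * np.sqrt(sigma2_true)

# Synthetic check: 1.5 nm true roughness plus 0.8 nm SEM noise
rng = np.random.default_rng(0)
edges = 1.5 * rng.standard_normal((50, 256)) + 0.8 * rng.standard_normal((50, 256))
print(unbiased_ler_3sigma(edges, 0.8))  # close to 3 * 1.5 = 4.5 nm
```

The PSD-domain version subtracts the flat high-frequency floor from the full spectrum instead, which preserves the frequency content of the roughness rather than only its total variance.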
A new method and device of aligning patient setup lasers in radiation therapy.
Hwang, Ui-Jung; Jo, Kwanghyun; Lim, Young Kyung; Kwak, Jung Won; Choi, Sang Hyuon; Jeong, Chiyoung; Kim, Mi Young; Jeong, Jong Hwi; Shin, Dongho; Lee, Se Byeong; Park, Jeong-Hoon; Park, Sung Yong; Kim, Siyong
2016-01-08
The aim of this study is to develop a new method to align the patient setup lasers in a radiation therapy treatment room and to examine its validity and efficiency. The new laser alignment method is realized by a device composed of a metallic base plate and a few acrylic transparent plates. Except for one, every plate has either a crosshair line (CHL) or a single vertical line that is used for alignment. Two holders for radiochromic film insertion are included in the device to find the radiation isocenter. The correct laser positions can be found optically by matching the shadows of all the CHLs in the gantry head and the device. The reproducibility, accuracy, and efficiency of laser alignment and the dependency on the position error of the light source were evaluated by comparing the means and standard deviations of the measured laser positions. After the optical alignment of the lasers, the radiation isocenter was found by gantry and collimator star shots, and the lasers were then translated parallel to the isocenter. In the laser position reproducibility test, the mean and standard deviation on the wall of the treatment room were 32.3 ± 0.93 mm for the new method, whereas they were 33.4 ± 1.49 mm for the conventional method. The mean alignment accuracy on the walls was 1.4 mm for the new method and 2.1 mm for the conventional method. In the test of the dependency on light source position error, the mean laser position shifted by only an amount similar to the shift of the light source with the new method, whereas the shift was greatly magnified with the conventional method. In this study, a new laser alignment method was devised and evaluated successfully. The new method provided more accurate, more reproducible, and faster alignment of the lasers than the conventional method.
Comparison of Minimally and More Invasive Methods of Determining Mixed Venous Oxygen Saturation.
Smit, Marli; Levin, Andrew I; Coetzee, Johan F
2016-04-01
To investigate the accuracy of a minimally invasive, 2-step, lookup method for determining mixed venous oxygen saturation compared with conventional techniques. Single-center, prospective, nonrandomized, pilot study. Tertiary care hospital, university setting. Thirteen elective cardiac and vascular surgery patients. All participants received intra-arterial and pulmonary artery catheters. Minimally invasive oxygen consumption and cardiac output were measured using a metabolic module and lithium-calibrated arterial waveform analysis (LiDCO; LiDCO, London), respectively. For the minimally invasive method, Step 1 entered these minimally invasive measurements, together with arterial oxygen content, into the Fick equation to calculate mixed venous oxygen content. Step 2 used an oxyhemoglobin curve spreadsheet to look up mixed venous oxygen saturation from the calculated mixed venous oxygen content. The conventional "invasive" technique used pulmonary artery intermittent thermodilution cardiac output, direct sampling of mixed venous and arterial blood, and the "reverse-Fick" method of calculating oxygen consumption. LiDCO overestimated thermodilution cardiac output by 26%. Pulmonary artery catheter-derived oxygen consumption underestimated metabolic module measurements by 27%. Mixed venous oxygen saturation differed between techniques; the calculated values underestimated the direct measurements by 12% to 26.3%, a statistically significant difference. The magnitude of the differences between the minimally invasive and invasive techniques was too great for the former to serve as a surrogate for the latter, and could adversely affect clinical decision making. Copyright © 2016 Elsevier Inc. All rights reserved.
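The two-step calculation underlying the minimally invasive method can be sketched as follows. The study used an oxyhemoglobin curve spreadsheet for Step 2; here, for illustration only, content is converted to saturation with the 1.34 mL/g hemoglobin carrying capacity and dissolved oxygen is neglected, so the numbers are approximate:

```python
def mixed_venous_saturation(vo2_ml_min, co_l_min, cao2_ml_dl, hb_g_dl):
    """Two-step SvO2 estimate (illustrative sketch).
    Step 1 (Fick): CvO2 = CaO2 - VO2 / (CO * 10)   [mL O2/dL]
    Step 2: convert content to saturation via Hb carrying capacity
    (1.34 mL O2 per g Hb; dissolved O2 neglected here)."""
    cvo2_ml_dl = cao2_ml_dl - vo2_ml_min / (co_l_min * 10.0)
    svo2 = cvo2_ml_dl / (1.34 * hb_g_dl)
    return max(0.0, min(1.0, svo2))

# Example: VO2 250 mL/min, CO 5 L/min, CaO2 20 mL/dL, Hb 15 g/dL
print(round(mixed_venous_saturation(250.0, 5.0, 20.0, 15.0), 3))  # 0.746
```

The sketch also makes the error propagation visible: a 26% overestimate of cardiac output and a 27% underestimate of oxygen consumption both enter Step 1 directly, which is consistent with the large saturation discrepancies the study reports.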
Liu, Tao; Thibos, Larry; Marin, Gildas; Hernandez, Martha
2014-01-01
Conventional aberration analysis by a Shack-Hartmann aberrometer is based on the implicit assumption that an injected probe beam reflects from a single fundus layer. In fact, the biological fundus is a thick reflector and therefore conventional analysis may produce errors of unknown magnitude. We developed a novel computational method to investigate this potential failure of conventional analysis. The Shack-Hartmann wavefront sensor was simulated by computer software and used to recover by two methods the known wavefront aberrations expected from a population of normally-aberrated human eyes and bi-layer fundus reflection. The conventional method determines the centroid of each spot in the SH data image, from which wavefront slopes are computed for least-squares fitting with derivatives of Zernike polynomials. The novel 'global' method iteratively adjusted the aberration coefficients derived from conventional centroid analysis until the SH image, when treated as a unitary picture, optimally matched the original data image. Both methods recovered higher order aberrations accurately and precisely, but only the global algorithm correctly recovered the defocus coefficients associated with each layer of fundus reflection. The global algorithm accurately recovered Zernike coefficients for mean defocus and bi-layer separation with maximum error <0.1%. The global algorithm was robust for bi-layer separation up to 2 dioptres for a typical SH wavefront sensor design. For 100 randomly generated test wavefronts with 0.7 D axial separation, the retrieved mean axial separation was 0.70 D with standard deviations (S.D.) of 0.002 D. Sufficient information is contained in SH data images to measure the dioptric thickness of dual-layer fundus reflection. 
The global algorithm is superior since it successfully recovered the focus value associated with both fundus layers even when their separation was too small to produce clearly separated spots, while the conventional analysis misrepresents the defocus component of the wavefront aberration as the mean defocus for the two reflectors. Our novel global algorithm is a promising method for SH data image analysis in clinical and visual optics research for human and animal eyes. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
NASA Astrophysics Data System (ADS)
Zhou, Wei-Xing; Sornette, Didier
2007-07-01
We have recently introduced the “thermal optimal path” (TOP) method to investigate the real-time lead-lag structure between two time series. The TOP method consists of searching for a robust noise-averaged optimal path of the distance matrix along which the two time series have the greatest similarity. Here, we generalize the TOP method by introducing a more general definition of distance which takes into account possible regime shifts between positive and negative correlations. By tracking possible changes of correlation sign, this generalization can identify transitions from one convention (or consensus) to another. Numerical simulations on synthetic time series verify that the new TOP method performs as expected even in the presence of substantial noise. We then apply it to investigate changes of convention in the dependence structure between the historical volatilities of the USA inflation rate and economic growth rate. Several measures show that the new TOP method significantly outperforms standard cross-correlation methods.
NASA Astrophysics Data System (ADS)
Casaccia, S.; Sirevaag, E. J.; Richter, E. J.; O'Sullivan, J. A.; Scalise, L.; Rohrbaugh, J. W.
2016-10-01
This report amplifies and extends prior descriptions of the use of laser Doppler vibrometry (LDV) as a method for assessing cardiovascular activity, on a non-contact basis. A rebreathing task (n = 35 healthy individuals) was used to elicit multiple effects associated with changes in autonomic drive as well as blood gases including hypercapnia. The LDV pulse was obtained from two sites overlying the carotid artery, separated by 40 mm. A robust pulse signal was obtained from both sites, in accord with the well-described changes in carotid diameter over the blood pressure cycle. Emphasis was placed on extracting timing measures from the LDV pulse, which could serve as surrogate measures of pulse wave velocity (PWV) and the associated arterial stiffness. For validation purposes, a standard measure of pulse transit time (PTT) to the radial artery was obtained using a tonometric sensor. Two key measures of timing were extracted from the LDV pulse. One involved the transit time along the 40 mm distance separating the two LDV measurement sites. A second measure involved the timing of a late feature of the LDV pulse contour, which was interpreted as reflection wave latency and thus a measure of round-trip travel time. Both LDV measures agreed with the conventional PTT measure, in disclosing increased PWV during periods of active rebreathing. These results thus provide additional evidence that measures based on the non-contact LDV technique might provide surrogate measures for those obtained using conventional, more obtrusive assessment methods that require attached sensors.
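The surrogate pulse wave velocity in the two-site arrangement is simply the 40 mm sensor spacing divided by the measured pulse arrival delay; a trivial sketch with illustrative numbers:

```python
def local_pwv_m_per_s(separation_m, transit_time_s):
    """Local pulse wave velocity from the arrival-time delay of the
    LDV pulse between two measurement sites along the carotid artery."""
    return separation_m / transit_time_s

# Example: 40 mm site spacing, 8 ms measured delay
print(local_pwv_m_per_s(0.040, 0.008))  # ≈ 5 m/s
```

Increased arterial stiffness raises PWV, so the rise in PWV during active rebreathing reported above corresponds to a shrinking transit time between the two sites.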
Complex regression Doppler optical coherence tomography
NASA Astrophysics Data System (ADS)
Elahi, Sahar; Gu, Shi; Thrane, Lars; Rollins, Andrew M.; Jenkins, Michael W.
2018-04-01
We introduce a new method to measure Doppler shifts more accurately and extend the dynamic range of Doppler optical coherence tomography (OCT). The two-point estimate of the conventional Doppler method is replaced with a regression that is applied to high-density B-scans in polar coordinates. We built a high-speed OCT system using a 1.68-MHz Fourier domain mode locked laser to acquire high-density B-scans (16,000 A-lines) at high enough frame rates (~100 fps) to accurately capture the dynamics of the beating embryonic heart. Flow phantom experiments confirm that the complex regression lowers the minimum detectable velocity from 12.25 mm/s to 374 μm/s, whereas the maximum velocity of 400 mm/s is measured without phase wrapping. Complex regression Doppler OCT also demonstrates higher accuracy and precision compared with the conventional method, particularly when the signal-to-noise ratio is low. The extended dynamic range allows monitoring of blood flow over several stages of development in embryos without adjusting the imaging parameters. In addition, applying complex averaging recovers hidden features in structural images.
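The core idea, replacing a two-point phase difference with a regression across many densely sampled A-lines, can be sketched on synthetic data (this is our simplified illustration, not the authors' polar-coordinate implementation):

```python
import numpy as np

def doppler_shift_hz(complex_alines, dt_s):
    """Doppler frequency at one depth from a linear regression of the
    unwrapped phase over many A-lines (vs. one adjacent-pair difference)."""
    phase = np.unwrap(np.angle(complex_alines))
    t = np.arange(len(phase)) * dt_s
    slope_rad_per_s = np.polyfit(t, phase, 1)[0]
    return slope_rad_per_s / (2.0 * np.pi)

# Synthetic check: a 500 Hz Doppler shift sampled at 100 kHz with noise
rng = np.random.default_rng(1)
t = np.arange(200) / 1e5
signal = np.exp(1j * 2 * np.pi * 500.0 * t)
signal += 0.05 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
print(doppler_shift_hz(signal, 1e-5))  # ≈ 500 Hz
```

Averaging over many samples is what suppresses phase noise at low SNR, and it is this suppression that lowers the minimum detectable velocity relative to a single adjacent-pair estimate.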
Precision depth measurement of through silicon vias (TSVs) on 3D semiconductor packaging process.
Jin, Jonghan; Kim, Jae Wan; Kang, Chu-Shik; Kim, Jong-Ahn; Lee, Sunghun
2012-02-27
We have proposed and demonstrated a novel method to measure depths of through silicon vias (TSVs) at high speed. TSVs are fine and deep holes fabricated in silicon wafers for 3D semiconductors; they are used for electrical connections between vertically stacked wafers. Because the high-aspect ratio hole of the TSV makes it difficult for light to reach the bottom surface, conventional optical methods using visible lights cannot determine the depth value. By adopting an optical comb of a femtosecond pulse laser in the infra-red range as a light source, the depths of TSVs having aspect ratio of about 7 were measured. This measurement was done at high speed based on spectral resolved interferometry. The proposed method is expected to be an alternative method for depth inspection of TSVs.
Predicting S-wave velocities for unconsolidated sediments at low effective pressure
Lee, Myung W.
2010-01-01
Accurate S-wave velocities for shallow sediments are important in performing a reliable elastic inversion for gas hydrate-bearing sediments and in evaluating velocity models for predicting S-wave velocities, but few S-wave velocities are measured at low effective pressure. Predicting S-wave velocities by using conventional methods based on the Biot-Gassmann theory appears to be inaccurate for laboratory-measured velocities at effective pressures less than about 4-5 megapascals (MPa). Measured laboratory and well log velocities show two distinct trends for S-wave velocities with respect to P-wave velocity: one for the S-wave velocity less than about 0.6 kilometer per second (km/s) which approximately corresponds to effective pressure of about 4-5 MPa, and the other for S-wave velocities greater than 0.6 km/s. To accurately predict S-wave velocities at low effective pressure less than about 4-5 MPa, a pressure-dependent parameter that relates the consolidation parameter to shear modulus of the sediments at low effective pressure is proposed. The proposed method in predicting S-wave velocity at low effective pressure worked well for velocities of water-saturated sands measured in the laboratory. However, this method underestimates the well-log S-wave velocities measured in the Gulf of Mexico, whereas the conventional method performs well for the well log velocities. The P-wave velocity dispersion due to fluid in the pore spaces, which is more pronounced at high frequency with low effective pressures less than about 4 MPa, is probably a cause for this discrepancy.
Bunaciu, Andrei A.; Udristioiu, Gabriela Elena; Ruţă, Lavinia L.; Fleschin, Şerban; Aboul-Enein, Hassan Y.
2009-01-01
A Fourier transform infrared (FT-IR) spectrometric method was developed for the rapid, direct measurement of diosmin in different pharmaceutical drugs. Conventional KBr-pellet spectra were compared to determine the best approach for quantifying the active substance in commercial preparations. The Beer–Lambert law and two chemometric approaches, the partial least squares (PLS) and principal component regression (PCR) methods, were evaluated for data processing. PMID:23960715
Concerning the Video Drift Method to Measure Double Stars
NASA Astrophysics Data System (ADS)
Nugent, Richard L.; Iverson, Ernest W.
2015-05-01
Classical methods to measure position angles and separations of double stars rely on just a few measurements, either from visual observations or photographic means. Visual and photographic CCD observations are subject to errors from the following sources: misalignments of the eyepiece/camera/Barlow lens/micrometer/focal reducers, systematic errors from uncorrected optical distortions, aberrations from the telescope system, camera tilt, and magnitude and color effects. Conventional video methods rely on calibration doubles and graphically calculating the east-west direction, plus careful choice of select video frames stacked for measurement. Atmospheric motion is one of the larger sources of error in any exposure/measurement method, on the order of 0.5-1.5. Ideally, if a data set from a short video can be used to derive position angle and separation, with each data set self-calibrating independently of any calibration doubles or star catalogues, this would provide measurements of high systematic accuracy. These aims are achieved by the video drift method first proposed by the authors in 2011. This self-calibrating video method automatically analyzes thousands of measurements from a short video clip.
Ground-Cover Measurements: Assessing Correlation Among Aerial and Ground-Based Methods
NASA Astrophysics Data System (ADS)
Booth, D. Terrance; Cox, Samuel E.; Meikle, Tim; Zuuring, Hans R.
2008-12-01
Wyoming’s Green Mountain Common Allotment is public land providing livestock forage, wildlife habitat, and unfenced solitude, among other ecological services. It is also the center of ongoing debate over the USDI Bureau of Land Management’s (BLM) adjudication of land uses. Monitoring resource use is a BLM responsibility, but conventional monitoring is inadequate for the vast areas encompassed in this and other public-land units. New monitoring methods are needed that will reduce monitoring costs, as is an understanding of data-set relationships among old and new methods. This study compared two conventional methods with two remote sensing methods using images captured from two meters and 100 meters above ground level from a camera stand (a ground, image-based method) and a light airplane (an aerial, image-based method). Image analysis used SamplePoint or VegMeasure software. Aerial methods allowed for increased sampling intensity at low cost relative to the time and travel required by ground methods. Costs to acquire the aerial imagery and measure ground cover on 162 aerial samples representing 9000 ha were less than $3,000. The four highest correlations among data sets for bare ground (the ground-cover characteristic yielding the highest correlations, r) ranged from 0.76 to 0.85 and included ground with ground, ground with aerial, and aerial with aerial data-set associations. We conclude that our aerial surveys are a cost-effective monitoring method, that ground with aerial data-set correlations can be equal to or greater than those among ground-based data sets, and that bare ground should continue to be investigated and tested for use as a key indicator of rangeland health.
Noh, Dong Koog; Lim, Jae-Young; Shin, Hyung-Ik; Paik, Nam-Jong
2008-01-01
To evaluate the effect of an aquatic therapy programme designed to increase balance in stroke survivors. A randomized, controlled pilot trial. Rehabilitation department of a university hospital. Ambulatory chronic stroke patients (n = 25): 13 in an aquatic therapy group and 12 in a conventional therapy group. The aquatic therapy group participated in a programme consisting of Ai Chi and Halliwick methods, which focused on balance and weight-bearing exercises. The conventional therapy group performed gym exercises. In both groups, the interventions occurred for 1 hour, three times per week, for eight weeks. The primary outcome measures were Berg Balance Scale score and weight-bearing ability, as measured by vertical ground reaction force during four standing tasks (rising from a chair and weight-shifting forward, backward and laterally). Secondary measures were muscle strength and gait. Compared with the conventional therapy group, the aquatic therapy group attained significant improvements in Berg Balance Scale scores, forward and backward weight-bearing abilities of the affected limbs, and knee flexor strength (P < 0.05), with effect sizes of 1.03, 1.14, 0.72 and 1.13 standard deviation units and powers of 75, 81, 70 and 26%, respectively. There were no significant changes in the other measures between the two groups. Postural balance and knee flexor strength were improved after aquatic therapy based on the Halliwick and Ai Chi methods in stroke survivors. Because of limited power and a small population base, further studies with larger sample sizes are required.
Comparison of air space measurement imaged by CT, small-animal CT, and hyperpolarized Xe MRI
NASA Astrophysics Data System (ADS)
Madani, Aniseh; White, Steven; Santyr, Giles; Cunningham, Ian
2005-04-01
Lung disease is the third leading cause of death in the western world. Lung air volume measurements are thought to be early indicators of lung disease and markers in pharmaceutical research. The purpose of this work is to develop a lung phantom for assessing and comparing the quantitative accuracy of hyperpolarized xenon 129 magnetic resonance imaging (HP 129Xe MRI), conventional computed tomography (HRCT), and high-resolution small-animal CT (μCT) in measuring lung gas volumes. We developed a lung phantom consisting of solid cellulose acetate spheres (1, 2, 3, 4 and 5 mm diameter) uniformly packed in circulated air or HP 129Xe gas. Air volume is estimated using a simple thresholding algorithm. Truth is calculated from the sphere diameters and validated using μCT. While this phantom is not anthropomorphic, it enables us to directly measure air space volume and compare these imaging methods as a function of sphere diameter for the first time. HP 129Xe MRI requires partial volume analysis to distinguish regions with and without 129Xe gas, and results are within 5% of truth, but settling of the heavy 129Xe gas complicates this analysis. Conventional CT demonstrated partial-volume artifacts for the 1 mm spheres. μCT gives the most accurate air-volume results. Conventional CT and HP 129Xe MRI give similar results, although non-uniform densities of 129Xe require more sophisticated algorithms than simple thresholding. The threshold required to give the true air volume in both HRCT and μCT varies with sphere diameter, calling into question the validity of the thresholding method.
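For illustration, the simple thresholding step described in this abstract can be sketched as follows. This is a minimal example on a synthetic array, not the study's code: the intensity distributions, the 40% air fraction and the threshold value are all arbitrary choices for the demonstration.

```python
import numpy as np

def air_volume_fraction(volume, threshold):
    """Estimate the air volume fraction of an imaging volume by simple
    thresholding: voxels with intensity below `threshold` count as air."""
    return np.count_nonzero(volume < threshold) / volume.size

# Synthetic "phantom": solid voxels at high intensity, with a known ~40%
# of voxels replaced by air-like (low) intensities.
rng = np.random.default_rng(0)
phantom = rng.normal(100.0, 5.0, size=(64, 64, 64))   # solid material
air_mask = rng.random((64, 64, 64)) < 0.4             # ~40% air by volume
phantom[air_mask] = rng.normal(-900.0, 20.0, size=air_mask.sum())

estimated = air_volume_fraction(phantom, threshold=-400.0)
```

Because the two intensity distributions are far apart here, any threshold between them recovers the true fraction; the abstract's point is precisely that on real images the required threshold shifts with structure size.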
Field assessment of alternative bed-load transport estimators
Gaeuman, G.; Jacobson, R.B.
2007-01-01
Measurement of near-bed sediment velocities with acoustic Doppler current profilers (ADCPs) is an emerging approach for quantifying bed-load sediment fluxes in rivers. Previous investigations of the technique have relied on conventional physical bed-load sampling to provide reference transport information with which to validate the ADCP measurements. However, physical samples are subject to substantial errors, especially under field conditions in which surrogate methods are most needed. Comparisons of ADCP bed velocity measurements with bed-load transport rates estimated from bed-form migration rates in the lower Missouri River show a strong correlation between the two surrogate measures over a wide range of mild to moderately intense sediment transporting conditions. The correlation between the ADCP measurements and physical bed-load samples is comparatively poor, suggesting that physical bed-load sampling is ineffective for ground-truthing alternative techniques in large sand-bed rivers. Bed velocities measured in this study became more variable with increasing bed-form wavelength at higher shear stresses. Under these conditions, bed-form dimensions greatly exceed the region of the bed ensonified by the ADCP, and the magnitude of the acoustic measurements depends on instrument location with respect to bed-form crests and troughs. Alternative algorithms for estimating bed-load transport from paired longitudinal profiles of bed topography were evaluated. An algorithm based on the routing of local erosion and deposition volumes that eliminates the need to identify individual bed forms was found to give results similar to those of more conventional dune-tracking methods. This method is particularly useful in cases where complex bed-form morphology makes delineation of individual bed forms difficult. © 2007 ASCE.
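The conventional dune-tracking estimate referenced in this abstract is commonly written as q_b = (1 - p) a H V_c, with porosity p, a shape factor a (about 0.5 for roughly triangular dunes), dune height H and migration celerity V_c. A minimal sketch under assumed, illustrative parameter values (none of these numbers come from the study):

```python
def dune_track_transport(height_m, celerity_m_s, porosity=0.4, shape=0.5):
    """Dune-tracking estimate of bed-load transport per unit width (m^2/s):
    q_b = (1 - porosity) * shape * H * Vc.
    shape ~0.5 assumes roughly triangular dunes; porosity ~0.4 is a
    typical value for sand beds. Both defaults are illustrative."""
    return (1.0 - porosity) * shape * height_m * celerity_m_s

# A 0.5 m high dune migrating at 2 m per hour:
qb = dune_track_transport(0.5, 2.0 / 3600.0)
```

The erosion/deposition-routing algorithm the abstract describes avoids needing H and V_c for individual dunes, which is exactly what this formula requires.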
Burns, Angus; Dowling, Adam H; Garvey, Thérèse M; Fleming, Garry J P
2014-10-01
To investigate the inter-examiner variability of contact point displacement measurements (used to calculate the overall Little's Irregularity Index (LII) score) from digital models of the maxillary arch by four independent examiners. Maxillary orthodontic pre-treatment study models of ten patients were scanned using the Lava™ Chairside Oral Scanner (LCOS) and 3D digital models were created using Creo® computer aided design (CAD) software. Four independent examiners measured the contact point displacements of the anterior maxillary teeth using the software. Measurements were recorded randomly on three separate occasions by the examiners and the measurements (n=600) obtained were analysed using correlation analyses and analyses of variance (ANOVA). LII contact point displacement measurements for the maxillary arch were reproducible for inter-examiner assessment when using the digital method and were highly correlated between examiner pairs for contact point displacement measurements >2 mm. The digital measurement technique showed poor correlation for smaller contact point displacement measurements (<2 mm) for repeated measurements. The coefficient of variation (CoV) of the digital contact point displacement measurements highlighted 348 of the 600 measurements differed by more than 20% of the mean compared with 516 of 600 for the same measurements performed using the conventional LII measurement technique. Although the inter-examiner variability of LII contact point displacement measurements on the maxillary arch was reduced using the digital compared with the conventional LII measurement methodology, neither method was considered appropriate for orthodontic research purposes particularly when measuring small contact point displacements. Copyright © 2014 Elsevier Ltd. All rights reserved.
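The LII score mentioned in this abstract is simply the sum of the linear displacements between the anatomic contact points of adjacent anterior teeth. A minimal sketch with hypothetical coordinates; the 2-D simplification, function name and numbers are illustrative, not the authors' CAD software:

```python
import math

def little_irregularity_index(contact_pairs):
    """Little's Irregularity Index: sum of the linear displacements between
    anatomic contact points of adjacent anterior teeth.
    `contact_pairs` holds one ((x1, y1), (x2, y2)) coordinate pair (in mm)
    per interproximal contact: five pairs for the six anterior teeth."""
    return sum(math.dist(a, b) for a, b in contact_pairs)

# Hypothetical digitized contact points for one maxillary arch (mm):
pairs = [
    ((0.0, 0.0), (1.5, 2.0)),      # displaced contact: 2.5 mm
    ((3.0, 3.0), (3.0, 3.0)),      # aligned contact:   0.0 mm
    ((4.0, 4.0), (7.0, 8.0)),      # displaced contact: 5.0 mm
    ((9.0, 9.0), (9.0, 9.0)),      # aligned contact:   0.0 mm
    ((10.0, 10.0), (10.6, 10.8)),  # displaced contact: 1.0 mm
]
lii = little_irregularity_index(pairs)  # 2.5 + 0 + 5.0 + 0 + 1.0 = 8.5
```

Because the index sums many small distances, the poor repeatability for displacements under 2 mm reported above propagates directly into the total score.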
Ghoveizi, Rahab; Alikhasi, Marzieh; Siadat, Mohammad-Reza; Siadat, Hakimeh; Sorouri, Majid
2013-01-01
Objective: Crestal bone loss is a biological complication in implant dentistry. The aim of this study was to compare the effect of progressive and conventional loading on crestal bone height and bone density around single osseointegrated implants in the posterior maxilla by a longitudinal radiographic assessment technique. Materials and Methods: Twenty micro-thread implants were placed in 10 patients (two implants per patient). One of the two implants in each patient was assigned to the progressive and the other to the conventional loading group. Eight weeks after surgery, conventional implants were restored with a metal ceramic crown and the progressive group underwent a progressive loading protocol. The progressive loading group received different temporary acrylic crowns at 2, 4 and 6 months. After eight months, the acrylic crowns were replaced with a metal ceramic crown. Computer radiography of both progressive and conventional implants was taken at 2, 4, 6, and 12 months. Image analysis was performed to measure the height of crestal bone loss and bone density. Results: The mean values of crestal bone loss at month 12 were 0.11 (0.19) mm for progressively and 0.36 (0.36) mm for conventionally loaded implants, a statistically significant difference (P < 0.05) by the Wilcoxon signed-rank test. The progressively loaded group showed a trend for higher bone density gain compared to the conventionally loaded group, but when tested with repeated-measures ANOVA, the differences were not statistically significant (P > 0.05). Conclusion: The progressive group showed less crestal bone loss around single osseointegrated implants than the conventional group. Bone density around progressively loaded implants increased in the crestal, middle and apical areas. PMID:23724215
Ullattuthodi, Sujana; Cherian, Kandathil Phillip; Anandkumar, R; Nambiar, M Sreedevi
2017-01-01
This in vitro study seeks to evaluate and compare the marginal and internal fit of cobalt-chromium copings fabricated using the conventional and direct metal laser sintering (DMLS) techniques. A master model of a prepared molar tooth was made using cobalt-chromium alloy. A silicone impression of the master model was made and thirty standardized working models were then produced: twenty working models for the conventional lost-wax technique and ten working models for the DMLS technique. A total of twenty metal copings were fabricated using two different production techniques, conventional lost-wax and DMLS, with ten samples in each group. The conventional and DMLS copings were cemented to the working models using glass ionomer cement. The marginal gap of the copings was measured at four predetermined points. The dies with the cemented copings were sectioned in a standardized manner with a heavy-duty lathe. Each sectioned sample was then analyzed for the internal gap between the die and the metal coping using a metallurgical microscope. Digital photographs were taken at ×50 magnification and analyzed using measurement software. Statistical analysis was done by unpaired t-test and analysis of variance (ANOVA). The results of this study reveal that no significant difference was present in the marginal gap of conventional and DMLS copings (P > 0.05) by means of ANOVA. The mean values of internal gap of DMLS copings were significantly greater than those of conventional copings (P < 0.05). Within the limitations of this in vitro study, it was concluded that the internal fit of conventional copings was superior to that of the DMLS copings. Marginal fit of the copings fabricated by the two techniques showed no significant difference.
Otsuji, Kazutaka; Sasaki, Takeshi; Tanaka, Atsushi; Kunita, Akiko; Ikemura, Masako; Matsusaka, Keisuke; Tada, Keiichiro; Fukayama, Masashi; Seto, Yasuyuki
2017-02-01
Digital polymerase chain reaction (dPCR) has been used to yield an absolute measure of nucleic acid concentrations. Recently, a new method referred to as droplet digital PCR (ddPCR) has gained attention as a more precise and less subjective assay to quantify DNA amplification. We demonstrated the usefulness of ddPCR to determine HER2 gene amplification in breast cancer. In this study, we used ddPCR to measure the HER2 gene copy number in clinical formalin-fixed paraffin-embedded samples of 41 primary breast cancer patients. To improve the accuracy of ddPCR analysis, we also estimated the tumor content ratio (TCR) for each sample. Our determination method for HER2 gene amplification using the ddPCR ratio (ERBB2:ch17cent copy number ratio) combined with the TCR showed high consistency with the conventionally defined HER2 gene status according to ASCO-CAP (American Society of Clinical Oncology/College of American Pathologists) guidelines (P<0.0001, Fisher's exact test). The equivocal area was established by adopting 99% confidence intervals obtained by cell line assays, which made it possible to identify all conventionally HER2-positive cases with our method. In addition, we succeeded in automating a major part of the process from DNA extraction to determination of HER2 gene status. The introduction of ddPCR to determine the HER2 gene status in breast cancer is feasible for use in clinical practice and might complement or even replace conventional methods of examination in the future.
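The abstract does not give the authors' TCR correction formula, but the idea of combining a bulk copy-number ratio with tumor content can be sketched with a simple two-population mixture model. The assumption that non-tumor cells contribute an ERBB2:ch17 ratio of exactly 1, and the linear mixing itself, are illustrative assumptions of this sketch, not the published method:

```python
def tumor_corrected_ratio(observed_ratio, tumor_content):
    """Back-calculate a tumor-intrinsic ERBB2:ch17 copy-number ratio from
    a bulk ddPCR ratio, assuming (illustratively) that non-tumor cells
    have a ratio of exactly 1 and that the bulk ratio mixes linearly in
    proportion to the tumor content ratio (TCR)."""
    if not 0.0 < tumor_content <= 1.0:
        raise ValueError("tumor content must be in (0, 1]")
    return (observed_ratio - (1.0 - tumor_content)) / tumor_content

# Under this model, a bulk ratio of 1.8 in a sample that is 40% tumor
# implies the tumor cells themselves carry a ratio of (1.8 - 0.6) / 0.4.
corrected = tumor_corrected_ratio(1.8, 0.40)
```

The sketch shows why a low TCR can drag a genuinely amplified sample toward the equivocal range, which is the motivation for estimating TCR per sample.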
ERIC Educational Resources Information Center
Heinemann, Allen W.; Shontz, Franklin C.
Conventional research strategies typically emphasize behavior-determining tendencies so strongly that the person as a whole is ignored. Research strategies for studying whole persons focus on symbolic structures, formulate specific questions in advance, study persons one at a time, use individualized measures, and regard participants as expert…
The Measurement of Term Importance in Automatic Indexing.
ERIC Educational Resources Information Center
Salton, G.; And Others
1981-01-01
Reviews major term-weighting theories, presents methods for estimating the relevance properties of terms based on their frequency characteristics in a document collection, and compares weighting systems using term relevance properties with more conventional frequency-based methodologies. Eighteen references are cited. (Author/FM)
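The "conventional frequency-based methodologies" this abstract compares against are typified by tf-idf weighting. A minimal sketch (tokenized documents and idf = log(N/df) are standard choices, not details from the paper):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Classic frequency-based term weighting: weight(t, d) = tf(t, d) * idf(t),
    with idf(t) = log(N / df(t)), where N is the number of documents and
    df(t) the number of documents containing term t."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    return [
        {t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
        for doc in docs
    ]

docs = [["sparse", "index", "term"], ["index", "term"], ["term"]]
weights = tf_idf(docs)
# "sparse" occurs in 1 of 3 documents, so it gets the largest idf;
# "term" occurs in every document, so its weight is log(3/3) = 0.
```

Term-relevance weighting, by contrast, folds in how a term's occurrences separate relevant from non-relevant documents rather than raw collection frequency alone.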
Apparatus and method for characterizing ultrafast polarization varying optical pulses
Smirl, Arthur; Trebino, Rick P.
1999-08-10
Practical techniques are described for characterizing ultrafast, potentially ultraweak, ultrashort optical pulses. The techniques are particularly suited to the measurement of signals from nonlinear optical materials characterization experiments, whose signals are generally too weak for full characterization using conventional techniques.
Faster modified protocol for first order reversal curve measurements
NASA Astrophysics Data System (ADS)
De Biasi, Emilio
2017-10-01
In this work we present a faster modified protocol for first order reversal curve (FORC) measurements. The main idea of this procedure is to use the information from the ascending and descending branches constructed through successive sweeps of the magnetic field. The new method reduces the number of field sweeps to almost one half compared to the traditional method. The length of each branch is reduced faster than in the usual FORC protocol. The new method implies not only a new measurement protocol but also a new recipe for the prior treatment of the data. After this pre-processing, the FORC diagram can be obtained by the conventional methods. In the present work we show that the new FORC procedure leads to results identical to those of the conventional method if the system under study follows the Stoner-Wohlfarth model with interactions that do not depend on the magnetic state (up or down) of the entities, as in the Preisach model; more specifically, if the coercive and interaction fields are not correlated and the hysteresis loops have a square shape. Some numerical examples show the comparison between the usual FORC procedure and the proposed one. We also discuss that some differences may be found in real systems due to the magnetic interactions. There is no reason to prefer one FORC method over the other from the point of view of the information to be obtained. On the contrary, the use of both methods could open doors for a more accurate and deeper analysis.
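The conventional FORC diagram referred to in this abstract is the mixed second derivative rho(Ha, Hb) = -(1/2) d²M/(dHa dHb) of the magnetization over the grid of reversal fields Ha and applied fields Hb. A minimal finite-difference sketch on a regular grid; real data require smoothing before differentiation, and the grids here are arbitrary:

```python
import numpy as np

def forc_distribution(M, Ha, Hb):
    """Conventional FORC diagram: rho(Ha, Hb) = -1/2 * d2M/(dHa dHb),
    evaluated by finite differences on a grid of magnetization values
    M[i, j] measured at reversal field Ha[i] and applied field Hb[j].
    A minimal sketch; measured data need smoothing (e.g. local polynomial
    fits) before taking derivatives."""
    dM_dHb = np.gradient(M, Hb, axis=1)
    return -0.5 * np.gradient(dM_dHb, Ha, axis=0)

# Analytic check: for M[i, j] = Ha[i] * Hb[j] the mixed derivative is 1
# everywhere, so the distribution is a constant -0.5.
Ha = np.linspace(-1.0, 0.0, 5)   # reversal fields
Hb = np.linspace(-1.0, 1.0, 7)   # applied fields
rho = forc_distribution(np.outer(Ha, Hb), Ha, Hb)
```

The faster protocol above changes how the M(Ha, Hb) grid is acquired and pre-processed; the diagram itself is still computed this way.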
Uncertainty of in-flight thrust determination
NASA Technical Reports Server (NTRS)
Abernethy, Robert B.; Adams, Gary R.; Steurer, John W.; Ascough, John C.; Baer-Riedhart, Jennifer L.; Balkcom, George H.; Biesiadny, Thomas
1986-01-01
Methods for estimating the measurement error or uncertainty of in-flight thrust determination in aircraft employing conventional turbofan/turbojet engines are reviewed. While the term 'in-flight thrust determination' is used synonymously with 'in-flight thrust measurement', in-flight thrust is not directly measured but is determined or calculated using mathematical modeling relationships between in-flight thrust and various direct measurements of physical quantities. The in-flight thrust determination process incorporates both ground testing and flight testing. The present text is divided into the following categories: measurement uncertainty methodology and in-flight thrust measurement processes.
Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S; Subramanian, Hariharan; Dravid, Vinayak P; Backman, Vadim
2017-06-01
Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass-density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass-density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass-density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass-density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass-density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes.
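When the full 3D field is known, the ACF can be computed by the standard FFT route (Wiener-Khinchin theorem). The sketch below shows that baseline on a synthetic periodic field; it is not the authors' projection-plus-thickness reconstruction, whose whole point is to avoid needing the deterministic 3D field:

```python
import numpy as np

def autocorrelation_3d(density):
    """Normalized autocorrelation function of a 3D mass-density fluctuation
    field via the Wiener-Khinchin theorem: ACF = IFFT(|FFT(f)|^2).
    Assumes periodic boundaries; this is the standard route for a fully
    known 3D field, shown here as a baseline only."""
    fluct = density - density.mean()
    power = np.abs(np.fft.fftn(fluct)) ** 2
    acf = np.fft.ifftn(power).real / fluct.size
    return acf / acf.flat[0]  # normalize so ACF at zero lag is 1

rng = np.random.default_rng(1)
rho = rng.random((16, 16, 16))   # synthetic density volume
acf = autocorrelation_3d(rho)
```

For isotropic media the ACF depends only on lag magnitude, which is the assumption that lets a single projection plus a thickness map constrain the 3D statistics.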
Investigation into photostability of soybean oils by thermal lens spectroscopy
NASA Astrophysics Data System (ADS)
Savi, E. L.; Malacarne, L. C.; Baesso, M. L.; Pintro, P. T. M.; Croge, C.; Shen, J.; Astrath, N. G. C.
2015-06-01
Assessment of photochemical stability is essential for evaluating the quality and shelf life of vegetable oils, which are very important aspects of marketing and human health. Most conventional methods used to investigate oxidative stability require lengthy experimental procedures with high consumption of chemical inputs for the preparation or extraction of sample compounds. In this work we propose a time-resolved thermal lens method to analyze the photostability of edible oils by quantitative measurement of the photoreaction cross-section. An all-numerical routine is employed to solve a complex theoretical problem involving photochemical reaction, the thermal lens effect, and mass diffusion during local laser excitation. The photostability of pure oil and oils with natural and synthetic antioxidants is investigated. The thermal lens results are compared with those obtained by conventional methods, and a complete set of physical properties of the samples is presented.
Extraction of organic contaminants from marine sediments and tissues using microwave energy.
Jayaraman, S; Pruell, R J; McKinney, R
2001-07-01
In this study, we compared microwave solvent extraction (MSE) to conventional methods for extracting organic contaminants from marine sediments and tissues with high and varying moisture content. The organic contaminants measured were polychlorinated biphenyl (PCB) congeners, chlorinated pesticides, and polycyclic aromatic hydrocarbons (PAHs). Initial experiments were conducted on dry standard reference materials (SRMs) and field-collected marine sediments. Moisture content in samples greatly influenced the recovery of the analytes of interest. When wet sediments were included in a sample batch, low recoveries were often encountered in other samples in the batch, including the dry SRM. Experiments were conducted to test the effect of standardizing the moisture content in all samples in a batch prior to extraction. SRM1941a (marine sediment) and SRM1974a (mussel tissue), as well as QA96SED6 (marine sediment) and QA96TIS7 (marine tissue), both from the 1996 NIST Intercalibration Exercise, were extracted using microwave and conventional methods. Moisture levels were adjusted in the SRMs to match those of the marine sediment and tissue samples before microwave extraction. The results demonstrated that it is crucial to standardize the moisture content in all samples, including dry reference material, to ensure good recovery of organic contaminants. MSE yielded equivalent or superior recoveries compared to conventional methods for the majority of the compounds evaluated. The advantages of MSE over conventional methods are reduced solvent usage, higher sample throughput and the elimination of halogenated solvent usage.
NASA Technical Reports Server (NTRS)
Bever, R. S.
1976-01-01
Internal embedment stress measurements were performed, using tiny ferrite core transformers, whose voltage output was calibrated versus pressure by the manufacturer. Comparative internal strain measurements were made by attaching conventional strain gages to the same type of resistors and encapsulating these in various potting compounds. Both types of determinations were carried out while temperature cycling from 77 C to -50 C.
Bourantas, Christos V; Papafaklis, Michail I; Athanasiou, Lambros; Kalatzis, Fanis G; Naka, Katerina K; Siogkas, Panagiotis K; Takahashi, Saeko; Saito, Shigeru; Fotiadis, Dimitrios I; Feldman, Charles L; Stone, Peter H; Michalis, Lampros K
2013-09-01
To develop and validate a new methodology that allows accurate 3-dimensional (3-D) coronary artery reconstruction using standard, simple angiographic and intravascular ultrasound (IVUS) data acquired during routine catheterisation enabling reliable assessment of the endothelial shear stress (ESS) distribution. Twenty-two patients (22 arteries: 7 LAD; 7 LCx; 8 RCA) who underwent angiography and IVUS examination were included. The acquired data were used for 3-D reconstruction using a conventional method and a new methodology that utilised the luminal 3-D centreline to place the detected IVUS borders and anatomical landmarks to estimate their orientation. The local ESS distribution was assessed by computational fluid dynamics. In corresponding consecutive 3 mm segments, lumen, plaque and ESS measurements in the 3-D models derived by the centreline approach were highly correlated to those derived from the conventional method (r > 0.98 for all). The centreline methodology had a 99.5% diagnostic accuracy for identifying segments exposed to low ESS and provided similar estimations to the conventional method for the association between the change in plaque burden and ESS (centreline method: slope = -1.65%/Pa, p = 0.078; conventional method: slope = -1.64%/Pa, p = 0.084; p = 0.69 for the difference between the two methodologies). The centreline methodology provides geometrically correct models and permits reliable ESS computation. The ability to utilise data acquired during routine coronary angiography and IVUS examination will facilitate clinical investigation of the role of local ESS patterns in the natural history of coronary atherosclerosis.
The influence of conservation tillage methods on soil water regimes in semi-arid southern Zimbabwe
NASA Astrophysics Data System (ADS)
Mupangwa, W.; Twomlow, S.; Walker, S.
Planting basins and ripper tillage practices are major components of the recently introduced conservation agriculture package that is being extensively promoted for smallholder farming in Zimbabwe. Besides preparing land for crop planting, these two technologies also help in collecting and using rainwater more efficiently in semi-arid areas. Basin tillage is targeted at households with limited or no access to draught animals, while ripping is meant for smallholder farmers with some draught animal power. Trials were established at four farms in Gwanda and Insiza in southern Zimbabwe to determine soil water contributions and runoff water losses from plots under four different tillage treatments: hand-dug planting basins, ripping, conventional spring ploughing and double ploughing using animal-drawn implements. The initial intention was to measure soil water changes and runoff losses from cropped plots under the four tillage practices; however, due to total crop failure, only soil water and runoff were measured from bare plots between December 2006 and April 2007. Runoff losses were highest under conventional ploughing. Planting basins retained most of the rainwater that fell during each rainfall event. The amount of rainfall received at each farm significantly influenced the volume of runoff water measured, with runoff volume increasing with the amount of rainfall received. Soil water content was consistently higher under basin tillage than under the other three tillage treatments. Significant differences in soil water content were observed across the farms according to soil type, ranging from sand to loamy sand. The basin tillage method gives better control of water losses from farmers’ fields and has a greater potential for providing soil water to crops than the ripper, double and single conventional ploughing practices.
Bhattacharya, D; Bhattacharya, R; Dhar, T K
1999-11-19
In an earlier communication we described a novel signal amplification technology termed Super-CARD, which is able to significantly improve antigen detection sensitivity in conventional Dot-ELISA by approximately 10^5-fold. The method utilizes hitherto unreported synthesized electron-rich proteins containing multiple phenolic groups which, when immobilized over a solid phase as a blocking agent, markedly increase the signal amplification capability of the existing CARD method (Bhattacharya, R., Bhattacharya, D., Dhar, T.K., 1999. A novel signal amplification technology based on catalyzed reporter deposition and its application in a Dot-ELISA with ultra high sensitivity. J. Immunol. Methods 227, 31.). In this paper we describe the utilization of this Super-CARD amplification technique in ELISA and its applicability for the rapid determination of aflatoxin B(1) (AFB(1)) in infected seeds. Using this method under identical conditions, the increase in absorbance over the CARD method was approximately 400%. The limit of detection of AFB(1) by this method was 0.1 pg/well, a sensitivity enhancement of 5-fold over the optimized CARD ELISA. Furthermore, the total incubation time was reduced to 16 min compared to 50 min for the CARD method. Assay specificity was not adversely affected and the amount of AFB(1) measured in seed extracts correlated well with the values obtained by conventional ELISA.
Keystroke dynamics in the pre-touchscreen era
Ahmad, Nasir; Szymkowiak, Andrea; Campbell, Paul A.
2013-01-01
Biometric authentication seeks to measure an individual’s unique physiological attributes for the purpose of identity verification. Conventionally, this task has been realized via analyses of fingerprints or signature iris patterns. However, whilst such methods effectively offer a superior security protocol compared with password-based approaches for example, their substantial infrastructure costs, and intrusive nature, make them undesirable and indeed impractical for many scenarios. An alternative approach seeks to develop similarly robust screening protocols through analysis of typing patterns, formally known as keystroke dynamics. Here, keystroke analysis methodologies can utilize multiple variables, and a range of mathematical techniques, in order to extract individuals’ typing signatures. Such variables may include measurement of the period between key presses, and/or releases, or even key-strike pressures. Statistical methods, neural networks, and fuzzy logic have often formed the basis for quantitative analysis on the data gathered, typically from conventional computer keyboards. Extension to more recent technologies such as numerical keypads and touch-screen devices is in its infancy, but obviously important as such devices grow in popularity. Here, we review the state of knowledge pertaining to authentication via conventional keyboards with a view toward indicating how this platform of knowledge can be exploited and extended into the newly emergent type-based technological contexts. PMID:24391568
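The inter-key timing variables this review describes, dwell time (how long a key is held) and flight time (the gap between releasing one key and pressing the next), can be sketched as follows. The (timestamp, key, action) event format is a hypothetical simplification; real systems hook operating-system key events:

```python
def typing_signature(events):
    """Extract dwell times (key held down) and flight times (release of one
    key to press of the next) from a stream of (timestamp_ms, key, action)
    events, where action is "down" or "up". Times are in milliseconds."""
    down = {}                  # key -> timestamp of its press
    dwell, flight = [], []
    last_up = None
    for t, key, action in events:
        if action == "down":
            down[key] = t
            if last_up is not None:
                flight.append(t - last_up)
        else:
            dwell.append(t - down.pop(key))
            last_up = t
    return dwell, flight

# A short hypothetical keystroke stream:
events = [
    (0, "p", "down"), (95, "p", "up"),
    (140, "a", "down"), (230, "a", "up"),
    (300, "s", "down"), (388, "s", "up"),
]
dwell, flight = typing_signature(events)  # dwell [95, 90, 88], flight [45, 70]
```

Feature vectors built from such timings are what the statistical, neural-network and fuzzy-logic classifiers mentioned above operate on.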
Radon in unconventional natural gas from gulf coast geopressured-geothermal reservoirs
Kraemer, T.F.
1986-01-01
Radon-222 has been measured in natural gas produced from experimental geopressured-geothermal test wells. Comparison with published data suggests that while radon activity of this unconventional natural gas resource is higher than conventional gas produced in the gulf coast, it is within the range found for conventional gas produced throughout the U.S. A method of predicting the likely radon activity of this unconventional gas is described on the basis of the data presented, methane solubility, and known or assumed reservoir conditions of temperature, fluid pressure, and formation water salinity.
Quantitative and simultaneous non-invasive measurement of skin hydration and sebum levels
Ezerskaia, Anna; Pereira, S. F.; Urbach, H. Paul; Verhagen, Rieko; Varghese, Babu
2016-01-01
We report a method for quantitative and simultaneous non-contact in-vivo hydration and sebum measurements of the skin using an infrared optical spectroscopic set-up. The method utilizes differential detection with three wavelengths, 1720, 1750, and 1770 nm, corresponding to the lipid vibrational bands that lie “in between” the prominent water absorption bands. We used an emulsifier containing hydro- and lipophilic components to mix water and sebum in various volume fractions, which was applied to the skin to mimic different oily-dry skin conditions. We also measured the skin sebum and hydration values on the forehead under natural conditions and their variations in response to external stimuli. Good agreement was found between our experimental results and reference values measured using conventional biophysical methods such as the Corneometer and Sebumeter. PMID:27375946
A New Proposal to Redefine Kilogram by Measuring the Planck Constant Based on Inertial Mass
NASA Astrophysics Data System (ADS)
Liu, Yongmeng; Wang, Dawei
2018-04-01
A novel method to measure the Planck constant based on inertial mass is proposed here, distinguished from the conventional Kibble balance experiment, which is based on gravitational mass. The kilogram unit is linked to the Planck constant by calculating the difference of the parameters, i.e. resistance, voltage, velocity, and time, measured in a two-mode experiment: an unloaded-mass mode and a loaded-mass mode. In principle, all parameters measured in this experiment can reach a high accuracy, as in the Kibble balance experiment. This method has the advantage that some systematic errors can be eliminated in the difference calculation of the measurements. In addition, the method is insensitive to air buoyancy, and the alignment work in this experiment is easy. Finally, the initial design of the apparatus is presented.
Historical perspective: The pros and cons of conventional outcome measures in Parkinson's disease.
Lim, Shen-Yang; Tan, Ai Huey
2018-01-01
Conventional outcome measures (COMs) in Parkinson's disease (PD) refer to rating scales, questionnaires, patient diaries and clinically-based tests that do not require specialized equipment. It is timely at this juncture - as clinicians and researchers begin to grapple with the "invasion" of digital technologies - to review the strengths and weaknesses of these outcome measures. This paper discusses advances (including an enhanced understanding of PD itself, and the development of clinimetrics as a field) that have led to improvements in the COMs used in PD; their strengths and limitations; and factors to consider when selecting and using a measuring instrument. It is envisaged that in the future, a combination of COMs and technology-based objective measures will be utilized, with different methods having their own strengths and weaknesses. Judgement is required on the part of the clinician and researcher in terms of which instrument(s) are appropriate to use, depending on the particular clinical or research setting or question. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Amako, Eri; Enjoji, Takaharu; Uchida, Satoshi; Tochikubo, Fumiyoshi
Constant monitoring and immediate control of fermentation processes are required for advanced quality preservation in the food industry. In the present work, simple estimation of the metabolic state of heat-injured Escherichia coli (E. coli) in a micro-cell was investigated using the dielectrophoretic impedance measurement (DEPIM) method. The temporal change in the conductance across the micro-gap (ΔG) was measured for various heat treatment temperatures. In addition, the dependence of enzyme activity, growth capacity, and membrane condition of E. coli on heat treatment temperature was also analyzed with conventional biological methods. Consequently, a quantitative correlation between ΔG and these biological properties was obtained. This result suggests that the DEPIM method can serve as an effective technique for monitoring complex changes in various biological states of microorganisms.
Deng, Jie; Virmani, Sumeet; Young, Joseph; Harris, Kathleen; Yang, Guang-Yu; Rademaker, Alfred; Woloschak, Gayle; Omary, Reed A.; Larson, Andrew C.
2010-01-01
Purpose To test the hypothesis that diffusion-weighted (DW)-PROPELLER (periodically rotated overlapping parallel lines with enhanced reconstruction) MRI provides more accurate liver tumor necrotic fraction (NF) and viable tumor volume (VTV) measurements than conventional DW-SE-EPI (spin echo echo-planar imaging) methods. Materials and Methods Our institutional Animal Care and Use Committee approved all experiments. In six rabbits implanted with 10 VX2 liver tumors, DW-PROPELLER and DW-SE-EPI scans were performed at contiguous axial slice positions covering each tumor volume. Apparent diffusion coefficient maps of each tumor were used to generate spatially resolved tumor viability maps for NF and VTV measurements. We compared NF, whole tumor volume (WTV), and VTV measurements to corresponding reference standard histological measurements based on correlation and concordance coefficients and the Bland–Altman analysis. Results DW-PROPELLER generally improved image quality with less distortion compared to DW-SE-EPI. DW-PROPELLER NF, WTV, and VTV measurements were strongly correlated and satisfactorily concordant with histological measurements. DW-SE-EPI NF measurements were weakly correlated and poorly concordant with histological measurements. Bland–Altman analysis demonstrated that DW-PROPELLER WTV and VTV measurements were less biased from histological measurements than the corresponding DW-SE-EPI measurements. Conclusion DW-PROPELLER MRI can provide spatially resolved liver tumor viability maps for accurate NF and VTV measurements, superior to DW-SE-EPI approaches. DW-PROPELLER measurements may serve as a noninvasive surrogate for pathology, offering the potential for more accurate assessments of therapy response than conventional anatomic size measurements. PMID:18407540
Equilibrium gas-oil ratio measurements using a microfluidic technique.
Fisher, Robert; Shah, Mohammad Khalid; Eskin, Dmitry; Schmidt, Kurt; Singh, Anil; Molla, Shahnawaz; Mostowfi, Farshid
2013-07-07
A method for measuring the equilibrium GOR (gas-oil ratio) of reservoir fluids using microfluidic technology is developed. Live crude oils (crude oil with dissolved gas) are injected into a long serpentine microchannel at reservoir pressure. The fluid forms a segmented flow as it travels through the channel. Gas and liquid phases are produced from the exit port of the channel, which is maintained at atmospheric conditions. The process is analogous to the production of crude oil from a formation. By using compositional analysis and thermodynamic principles of hydrocarbon fluids, we show that excellent equilibrium between the produced gas and liquid phases is achieved. The GOR of a reservoir fluid is a key parameter in determining the equation of state of a crude oil. Equations of state that are commonly used in petroleum engineering and reservoir simulations describe the phase behaviour of a fluid at equilibrium state. Therefore, to accurately determine the coefficients of an equation of state, the produced gas and liquid phases have to be as close to thermodynamic equilibrium as possible. In the examples presented here, the GORs measured with the microfluidic technique agreed with GOR values obtained from conventional methods. Furthermore, when compared to conventional methods, the microfluidic technique was simpler to perform, required less equipment, and yielded better repeatability.
Sherrod, S.K.; Belnap, J.; Miller, M.E.
2002-01-01
Four methods for measuring quantities of 12 plant-available nutrients were compared using three sandy soils in a series of three experiments. Three of the methods use different ion-exchange resin forms—bags, capsules, and membranes—and the fourth was conventional chemical extraction. The first experiment compared nutrient extraction data from a medium of sand saturated with a nutrient solution. The second and third experiments used Nakai and Sheppard series soils from Canyonlands National Park, which are relatively high in soil carbonates. The second experiment compared nutrient extraction data provided by the four methods from soils equilibrated at two temperatures, “warm” and “cold.” The third experiment extracted nutrients from the same soils in a field equilibration. Our results show that the four extraction techniques are not comparable. This conclusion is due to differences among the methods in the net quantities of nutrients extracted from equivalent soil volumes, in the proportional representation of nutrients within similar soils and treatments, in the measurement of nutrients that were added in known quantities, and even in the order of nutrients ranked by net abundance. We attribute the disparities in nutrient measurement among the different resin forms to interacting effects of the inherent differences in resin exchange capacity, differences among nutrients in their resin affinities, and possibly the relatively short equilibration time for laboratory trials. One constraint for measuring carbonate-related nutrients in high-carbonate soils is the conventional ammonium acetate extraction method, which we suspect of dissolving fine CaCO3 particles that are more abundant in Nakai series soils, resulting in erroneously high Ca2+ estimates. For study of plant-available nutrients, it is important to identify the nutrients of foremost interest and understand differences in their resin sorption dynamics to determine the most appropriate extraction method.
NASA Astrophysics Data System (ADS)
Saremi, Mohsen; Keyvani, Ahmad; Heydarzadeh Sohi, Mahmoud
Conventional and nanostructured zirconia coatings were deposited on IN-738 Ni superalloy by the atmospheric plasma spray technique. The hot corrosion resistance of the coatings was measured at 1050°C using an atmospheric electrical furnace and a fused mixture of vanadium pentoxide and sodium sulfate. According to the experimental results, nanostructured coatings showed better hot corrosion resistance than conventional ones. The improved hot corrosion resistance can be explained by the denser, more closely packed structure of the nanocoating. The evaluation of mechanical properties by the nanoindentation method showed that the hardness (H) and elastic modulus (E) of the YSZ coating increased substantially after hot corrosion.
NASA Technical Reports Server (NTRS)
Banerjee, S. K.
1984-01-01
It is impossible to carry out conventional paleointensity experiments, which require repeated heating and cooling to 770 °C, on lunar samples without inducing chemical, physical, or microstructural changes. Non-thermal methods of paleointensity determination have therefore been sought: the two anhysteretic remanent magnetization (ARM) methods and the saturation isothermal remanent magnetization (IRMS) method. Experimental errors inherent in these alternative approaches have been investigated to estimate the accuracy limits on the calculated paleointensities. Results are indicated in this report.
Wang, Junlong; Zhang, Ji; Wang, Xiaofang; Zhao, Baotang; Wu, Yiqian; Yao, Jian
2009-12-01
Conventional extraction methods for polysaccharides are time-consuming, laborious, and energy-intensive. A microwave-assisted extraction (MAE) technique was employed for the extraction of Artemisia sphaerocephala polysaccharides (ASP), a traditional Chinese food. The extraction parameters were optimized by Box-Behnken design. In the microwave heating process, a decrease in molecular weight (M(w)) was detected by SEC-LLS measurement. A d(f) value of 2.85 indicated that ASP obtained by MAE adopts a spherical conformation of branched clusters in aqueous solution. Furthermore, it showed stronger antioxidant activities compared with hot-water extraction. The data obtained showed that the molecular weights played a more important role in the antioxidant activities.
Diviš, Pavel; Kadlecová, Milada; Ouddane, Baghdad
2016-05-01
The distribution of mercury in surface water and in sediment from the Deûle River in Northern France was studied by application of conventional sampling methods and by the diffusive gradients in thin films technique (DGT). The concentration of total dissolved mercury in surface water was 20.8 ± 0.8 ng l(-1). The particulate mercury concentration was 6.2 ± 0.6 µg g(-1). The particulate mercury was accumulated in sediment (9.9 ± 2.3 mg kg(-1)), and it was transformed by methylating bacteria to methylmercury, mainly in the first 2-cm layer of the sediment. The total dissolved concentration of mercury in sediment pore water obtained by centrifugation extraction was 17.6 ± 4.1 ng l(-1), comparable with the total dissolved pore water mercury concentration measured by a DGT probe containing Duolite GT-73 resin gel (18.2 ± 4.3 ng l(-1)), taking the sediment heterogeneity and the different principles of the applied methods into account. By application of two DGT probes with different resin gels specific for mercury, it was found that approximately 30% of total dissolved mercury in sediment pore water was present in labile forms readily available to biota. The resolution of the mercury DGT depth profiles was 0.5 cm, which, unlike conventional techniques, allows the geochemical cycle of mercury to be studied in connection with the geochemical cycles of iron and manganese.
Gillard, Montgomery; Wang, Timothy S; Boyd, Charles M; Dunn, Rodney L; Fader, Darrell J; Johnson, Timothy M
2002-08-01
To directly compare cosmetic improvement and postoperative sequelae resulting from dermabrasion of surgical scars with conventional motor-powered diamond fraise vs manual dermabrasion with medium-grade drywall sanding screen. Patients were randomly assigned to receive treatment with conventional diamond fraise dermabrasion to one half of the scar and manual dermabrasion with a drywall sanding screen to the other half in a prospective, comparative clinical study. Blinded observers assessed clinical variables during a 6-month follow-up period. University hospital/cancer center-based cutaneous surgery unit. Twenty-one healthy volunteers, Fitzpatrick skin type I to III, with contour irregularities resulting from granulation (7 patients) or reconstruction (14 patients) after skin cancer excision. One half of the patient's scar was treated with motor-powered diamond fraise dermabrasion and the other half was treated with manual dermabrasion with medium-grade drywall sanding screen. Correction of contour, scarline visibility, time to reepithelialization, presence or absence of milia, degree of postoperative erythema, hypertrophic scarring, patients' subjective reports of postoperative pain, and presence of pigmentary changes were observed for both methods. Standardized scoring systems were used to quantify outcome measures. According to the standardized scoring systems, no differences were found between the 2 methods at any point. In addition, no significant differences were found between the methods for any measure at any of the time points. Both dermabrasion techniques are equally effective in improving the cosmetic appearance of surgical scars.
A probability-based multi-cycle sorting method for 4D-MRI: A simulation study
Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing
2016-01-01
Purpose: To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate performance of this new method by comparing with conventional phase-based methods in terms of image quality and tumor motion measurement. Methods: Based on previous findings that the breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from the true stabilized PDF that results from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) The breathing signal is decomposed into individual breathing cycles, characterized by amplitude and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined as a main breathing pattern group and is represented by the average of the individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients' breathing signals to evaluate its feasibility of improving target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom.
Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured by the 4D images, and also the accuracy of average intensity projection (AIP) of 4D images. Results: Probability-based sorting showed improved similarity of breathing motion PDF from 4D images to reference PDF compared to single cycle sorting, indicated by the significant increase in Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03, and single cycle sorting, DSC = 0.83 ± 0.05, p-value <0.001). Based on the simulation study on XCAT, the probability-based method outperforms the conventional phase-based methods in qualitative evaluation on motion artifacts and quantitative evaluation on tumor volume precision and accuracy and accuracy of AIP of the 4D images. Conclusions: In this paper the authors demonstrated the feasibility of a novel probability-based multicycle 4D image sorting method. The authors’ preliminary results showed that the new method can improve the accuracy of tumor motion PDF and the AIP of 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management. PMID:27908178
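The cycle-grouping step (step 2) described in the Methods can be sketched as follows. The >10% main-group rule comes from the abstract; the (amplitude, period) representation, bin widths, and example values are illustrative assumptions.

```python
from collections import defaultdict

def main_breathing_cycles(cycles, amp_bin=2.0, per_bin=0.5):
    """Group cycles by (amplitude, period) bins; keep groups holding >10% of cycles.

    cycles: list of (amplitude_mm, period_s) tuples, one per breathing cycle.
    Returns a list of (mean_amplitude, mean_period, weight) for the main groups.
    """
    groups = defaultdict(list)
    for amp, per in cycles:
        key = (round(amp / amp_bin), round(per / per_bin))  # coarse 2D binning
        groups[key].append((amp, per))
    n = len(cycles)
    mains = []
    for members in groups.values():
        if len(members) > 0.1 * n:                          # the 10% rule
            amps, pers = zip(*members)
            mains.append((sum(amps) / len(amps), sum(pers) / len(pers),
                          len(members) / n))
    return mains

# Invented signal: 15 regular cycles plus one outlier -> one main group.
cycles = [(10.0, 4.0)] * 15 + [(25.0, 9.0)]
print(main_breathing_cycles(cycles))
```

Each returned (amplitude, period, weight) triple would then seed the result-driven reconstruction of step 3, with the weight carrying the group's share of the breathing signal.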
A paired-laser photogrammetric method for in situ length measurement of benthic fishes
Rizzo, Austin A.; Welsh, Stuart A.; Thompson, Patricia A.
2017-01-01
Photogrammetry, a technique to obtain measurements from photographs, may be a valid method for measuring lengths of rare, threatened, or endangered species. Photogrammetric methods of measurement are nonintrusive and reduce the possibility of physical damage or physiological stress associated with the capture and handling of individuals. We evaluated precision and accuracy of photogrammetric length measurements relative to board measurements of Greenside Darters Etheostoma blennioides and Variegate Darters E. variatum in an aquarium and applied photogrammetry in a field study of the Diamond Darter Crystallaria cincotta, a federally listed endangered species. Digital photographs were taken of each individual using a waterproof camera equipped with two parallel lasers. Photogrammetric length measurements were digitized with ImageJ software. Agreement between board and photogrammetric measurements was high for Greenside and Variegate darters. The magnitude of differences was small between direct and photogrammetric measurements, ranging from 0.6% to 3.1%, depending on the species measured and the type of measurement taken. These results support photogrammetry as a useful method for obtaining length measurements of benthic stream fishes. Photogrammetric methods allowed for length measurements and an assessment of length frequency of 199 Diamond Darters, informative data for management that could not be collected with conventional measuring-board methods.
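The paired-laser idea reduces to a per-image scale factor: two parallel lasers a fixed, known distance apart project dots into each photograph, and the dot spacing in pixels calibrates pixel distances to real lengths. The sketch below illustrates that conversion; the 60 mm spacing and pixel values are invented, not taken from the study.

```python
def fish_length_mm(laser_px, fish_px, laser_spacing_mm=60.0):
    """Convert a fish's pixel length to mm using the laser-dot pixel spacing.

    laser_px: distance between the two laser dots in the image, in pixels.
    fish_px: digitized snout-to-tail length in the same image, in pixels.
    """
    mm_per_px = laser_spacing_mm / laser_px   # per-image scale factor
    return fish_px * mm_per_px

# Invented example: 60 mm spanning 120 px gives 0.5 mm/px, so 300 px -> 150 mm.
print(fish_length_mm(laser_px=120.0, fish_px=300.0))
```

Because the scale is recomputed from the dots in every photograph, camera-to-fish distance can vary between shots without biasing the length estimate, provided the lasers stay parallel.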
Al-Omiri, Mahmoud K; Harb, Rousan; Abu Hammad, Osama A; Lamey, Philip-John; Lynch, Edward; Clifford, Thomas J
2010-07-01
This study aimed to evaluate the reliability of a new CAD-CAM laser scanning machine in the detection of incisal tooth wear over a 6-month period and to compare the accuracy of this new machine against measuring tooth wear using a toolmaker's microscope and a conventional tooth wear index. Twenty participants (11 males and 9 females, mean age = 22.7 years, SD = 2.0) were assessed for incisal tooth wear of the lower anterior teeth using the Smith and Knight clinical tooth wear index (TWI) on two occasions, the study baseline and 6 months later. Stone dies for each tooth were prepared and scanned using the CAD-CAM laser Cercon System (Cercon Smart Ceramics, DeguDent, Germany). Scanned images were printed and examined under a toolmaker's microscope (Stedall-Dowding Machine Tool Company, Optique et Mecanique de Precision, Marcel Aubert SA, Switzerland) to quantify tooth wear, and then the dies were directly assessed under the microscope to measure tooth wear. The Wilcoxon signed ranks test was used to analyse the data. TWI scores for incisal edges were 0, 1, and 2 and were similar on both occasions. Scores 3 and 4 were not detected. Wear values measured by directly assessing the dies under the toolmaker's microscope (range = 517-656 µm, mean = 582 µm, SD = 50) were significantly greater than those measured from the Cercon digital machine images (range = 132-193 µm, mean = 165 µm, SD = 27), and both showed significant differences between the two occasions. Measuring images obtained with the Cercon digital machine under the toolmaker's microscope allowed detection of wear progression over the 6-month period. However, measuring the dies of the worn dentition directly under the toolmaker's microscope enabled detection of wear progression more accurately. The conventional method was the least sensitive for tooth wear quantification and was unable to identify wear progression in most cases. Copyright 2010 Elsevier Ltd. All rights reserved.
On-line noninvasive one-point measurements of pulse wave velocity.
Harada, Akimitsu; Okada, Takashi; Niki, Kiyomi; Chang, Dehua; Sugawara, Motoaki
2002-12-01
Pulse wave velocity (PWV) is a basic parameter in the dynamics of pressure and flow waves traveling in arteries. Conventional on-line methods of measuring PWV have mainly been based on "two-point" measurements, i.e., measurements of the time of travel of the wave over a known distance. This paper describes two methods by which on-line "one-point" measurements can be made, and compares the results obtained by the two methods. The principle of one method is to measure blood pressure and velocity at a point, and use the water-hammer equation for forward traveling waves. The principle of the other method is to derive PWV from the stiffness parameter of the artery. Both methods were realized by using an ultrasonic system which we specially developed for noninvasive measurements of wave intensity. We applied the methods to the common carotid artery in 13 normal humans. The regression line of the PWV (m/s) obtained by the former method on the PWV (m/s) obtained by the latter method was y = 1.03x - 0.899 (R(2) = 0.83). Although regional PWV in the human carotid artery has not been reported so far, the correlation between the PWVs obtained by the present two methods was so high that we are convinced of the validity of these methods.
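The two one-point estimates described above can be written compactly. The water-hammer relation for a purely forward wave is dP = ρ·c·dU; the stiffness-parameter form PWV = sqrt(β·Pd/(2ρ)), with β = ln(Ps/Pd)/((Ds − Dd)/Dd), is the relation commonly associated with this approach and is stated here as an assumption, as are the blood density and example values.

```python
import math

RHO = 1050.0  # assumed blood density, kg/m^3

def pwv_water_hammer(dP, dU):
    """Water-hammer estimate: dP = rho * c * dU for a purely forward wave.

    dP in Pa, dU in m/s; returns c in m/s.
    """
    return dP / (RHO * dU)

def pwv_stiffness(Ps, Pd, Ds, Dd):
    """PWV from the stiffness parameter beta = ln(Ps/Pd) / ((Ds - Dd) / Dd).

    Ps, Pd: systolic/diastolic pressure (Pa); Ds, Dd: systolic/diastolic
    diameter (any consistent unit, since only the ratio enters beta).
    """
    beta = math.log(Ps / Pd) / ((Ds - Dd) / Dd)
    return math.sqrt(beta * Pd / (2.0 * RHO))

# Invented example: a 2 kPa pressure rise with a 0.3 m/s velocity rise gives
# c = 2000 / (1050 * 0.3) ~ 6.3 m/s.
print(pwv_water_hammer(2000.0, 0.3))
```

Both functions need only simultaneously measured quantities at a single site, which is what makes them "one-point" methods in contrast to the two-point transit-time approach.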
Conventionalism and Methodological Standards in Contending with Skepticism about Uncertainty
NASA Astrophysics Data System (ADS)
Brumble, K. C.
2012-12-01
What it means to measure and interpret confidence and uncertainty in a result is often particular to a specific scientific community and its methodology of verification. Additionally, methodology in the sciences varies greatly across disciplines and scientific communities. Understanding the accuracy of predictions of a particular science thus depends largely upon having an intimate working knowledge of the methods, standards, and conventions utilized and underpinning discoveries in that scientific field. Thus, valid criticism of scientific predictions and discoveries must be conducted by those who are literate in the field in question: they must have intimate working knowledge of the methods of the particular community and of the particular research under question. The interpretation and acceptance of uncertainty is one such shared, community-based convention. In the philosophy of science, this methodological and community-based way of understanding scientific work is referred to as conventionalism. By applying the conventionalism of historian and philosopher of science Thomas Kuhn to recent attacks upon methods of multi-proxy mean temperature reconstructions, I hope to illuminate how climate skeptics and their adherents fail to appreciate the need for community-based fluency in the methodological standards for understanding uncertainty shared by the wider climate science community. Further, I will flesh out a picture of climate science community standards of evidence and statistical argument following the work of philosopher of science Helen Longino. I will describe how failure to appreciate the conventions of professionalism and standards of evidence accepted in the climate science community results in the application of naïve falsification criteria. 
Appeal to naïve falsification in turn has allowed scientists outside the standards and conventions of the mainstream climate science community to consider themselves and to be judged by climate skeptics as valid critics of particular statistical reconstructions with naïve and misapplied methodological criticism. Examples will include the skeptical responses to multi-proxy mean temperature reconstructions and congressional hearings criticizing the work of Michael Mann et al.'s Hockey Stick.
Apparatus and method for characterizing ultrafast polarization varying optical pulses
Smirl, A.; Trebino, R.P.
1999-08-10
Practical techniques are described for characterizing ultrafast, potentially ultraweak, ultrashort optical pulses. The techniques are particularly suited to the measurement of signals from nonlinear optical materials characterization experiments, whose signals are generally too weak for full characterization using conventional techniques. 2 figs.
NASA Technical Reports Server (NTRS)
Achtemeier, Gary L.; Kidder, Stanley Q.; Scott, Robert W.
1988-01-01
The variational multivariate assimilation method described in a companion paper by Achtemeier and Ochs is applied to conventional and conventional-plus-satellite data. Ground-based and space-based meteorological data are weighted according to the respective measurement errors and blended into a data set that is a solution of numerical forms of the two nonlinear horizontal momentum equations, the hydrostatic equation, and an integrated continuity equation for a dry atmosphere. The analyses serve, first, to evaluate the accuracy of the model and, second, to contrast the analyses with and without satellite data. Evaluation criteria measure the extent to which: (1) the assimilated fields satisfy the dynamical constraints, (2) the assimilated fields depart from the observations, and (3) the assimilated fields are judged to be realistic through pattern analysis. The last criterion requires that the signs, magnitudes, and patterns of the hypersensitive vertical velocity and local tendencies of the horizontal velocity components be physically consistent with respect to the larger-scale weather systems.
Noise suppression due to annulus shaping of conventional coaxial nozzle
NASA Technical Reports Server (NTRS)
Vonglahn, U.; Goodykoontz, J.
1980-01-01
A method which shows that increasing the annulus width of a conventional coaxial nozzle with constant bypass velocity will lower the noise level is described. The method entails modifying a concentric coaxial nozzle to provide an eccentric outer stream annulus while maintaining approximately the same through flow as that for the original concentric bypass nozzle. Acoustical tests to determine the noise generating characteristics of the nozzle over a range of flow conditions are described. The tests involved sequentially analyzing the noise signals and digitally recording the 1/3 octave band sound pressure levels. The measurements were made in a plane passing through the minimum and maximum annulus width points, as well as at 90 degrees in this plane, by rotating the outer nozzle about its axis. Representative measured spectral data in the flyover plane for the concentric nozzle obtained at model scale are discussed. Representative spectra for several engine cycles are presented for both the eccentric and concentric nozzles at engine size.
A double sealing technique for increasing the precision of headspace-gas chromatographic analysis.
Xie, Wei-Qi; Yu, Kong-Xian; Gong, Yi-Xian
2018-01-19
This paper investigates a new double sealing technique for increasing the precision of the headspace gas chromatographic method. The air leakage problem caused by the high pressure in the headspace vial during the headspace sampling process has a great impact on the measurement precision in conventional headspace analysis (i.e., the single sealing technique). The results (using ethanol solution as the model sample) show that the present technique is effective in minimizing this problem. The double sealing technique has excellent measurement precision (RSD < 0.15%) and accuracy (recovery = 99.1%-100.6%) for ethanol quantification. The detection precision of the present method was 10-20 times higher than that in earlier HS-GC work that used the conventional single sealing technique. The present double sealing technique may open up a new avenue, and also serve as a general strategy, for improving the performance (i.e., accuracy and precision) of headspace analysis of various volatile compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
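The two figures of merit quoted in this abstract, relative standard deviation (RSD) and recovery, are standard calculations; a minimal sketch follows, with invented replicate readings rather than the paper's data.

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation: sample SD as a percentage of the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def recovery_percent(measured_mean, true_value):
    """Recovery: measured mean as a percentage of the known true value."""
    return 100.0 * measured_mean / true_value

# Invented ethanol replicate readings (% v/v) against a known 10.0 standard.
replicates = [10.02, 10.01, 10.03, 10.02, 10.01]
print(rsd_percent(replicates))        # well under an RSD of 0.15%
print(recovery_percent(statistics.mean(replicates), 10.0))
```

RSD captures run-to-run repeatability (the quantity the double sealing improves), while recovery captures bias against a known standard; a method can score well on one and poorly on the other.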
Reversing the conventional leather processing sequence for cleaner leather production.
Saravanabhavan, Subramani; Thanikaivelan, Palanisamy; Rao, Jonnalagadda Raghava; Nair, Balachandran Unni; Ramasami, Thirumalachari
2006-02-01
Conventional leather processing generally involves a combination of single and multistep processes that employs as well as expels various biological, inorganic, and organic materials. It involves nearly 14-15 steps and discharges a huge amount of pollutants. This is primarily due to the fact that conventional leather processing employs a "do-undo" process logic. In this study, the conventional leather processing steps have been reversed to overcome the problems associated with the conventional method. The charges of the skin matrix and of the chemicals and pH profiles of the process have been judiciously used for reversing the process steps. This reversed process eventually avoids several acidification and basification/neutralization steps used in conventional leather processing. The developed process has been validated through various analyses such as chromium content, shrinkage temperature, softness measurements, scanning electron microscopy, and physical testing of the leathers. Further, the performance of the leathers is shown to be on par with conventionally processed leathers through bulk property evaluation. The process enjoys a significant reduction in COD and TS by 53 and 79%, respectively. Water consumption and discharge is reduced by 65 and 64%, respectively. Also, the process benefits from significant reduction in chemicals, time, power, and cost compared to the conventional process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santi, C. de; Meneghini, M., E-mail: matteo.meneghini@dei.unipd.it; Meneghesso, G.
2014-08-18
With this paper we propose a test method for evaluating the dynamic performance of GaN-based transistors, namely, gate-frequency sweep measurements: the effectiveness of the method is verified by characterizing the dynamic performance of Gate Injection Transistors. We demonstrate that this method can provide an effective description of the impact of traps on the transient performance of Heterojunction Field Effect Transistors, and information on the properties (activation energy and cross section) of the related defects. Moreover, we discuss the relation between the results obtained by gate-frequency sweep measurements and those collected by conventional drain current transients and double pulse characterization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cervino, L; Soultan, D; Pettersson, N
2016-06-15
Purpose: To evaluate the dosimetric and radiobiological consequences of different gating windows, dose rates, and breathing patterns in gated VMAT lung radiotherapy. Methods: A novel 3D-printed moving phantom with central high and peripheral low tracer uptake regions was 4D FDG-PET/CT-scanned using ideal, patient-specific regular, and irregular breathing patterns. A scan of the stationary phantom was obtained as a reference. Target volumes corresponding to different uptake regions were delineated. Simultaneous integrated boost (SIB) 6 MV VMAT plans were produced for conventional and hypofractionated radiotherapy, using 30-70% and 100% cycle gating scenarios. Prescribed doses were 200 cGy with an SIB to 240 cGy to the high-uptake volume for conventional plans, and 800 cGy with an SIB to 900 cGy for hypofractionated plans. Dose rates of 600 MU/min (conventional and hypofractionated) and flattening-filter-free 1400 MU/min (hypofractionated) were used. Ion chamber measurements were performed to verify delivered doses. Vials with A549 cells placed in locations matching the ion chamber measurements were irradiated using the same plans to measure clonogenic survival. Differences in survival for the different doses, dose rates, gating windows, and breathing patterns were analyzed. Results: Ion chamber measurements agreed within 3% of the planned dose for all locations, breathing patterns, and gating windows. Cell survival depended on dose alone, and not on gating window, breathing pattern, MU rate, or delivery time. The surviving fraction varied from approximately 40% at 2 Gy to 1% at 9 Gy and was within statistical uncertainty relative to that observed for the stationary phantom. Conclusions: Use of gated VMAT in PET-driven SIB radiotherapy was validated using ion chamber measurements and cell survival assays for conventional and hypofractionated radiotherapy.
Dietary assessment and self-monitoring with nutrition applications for mobile devices.
Lieffers, Jessica R L; Hanning, Rhona M
2012-01-01
Nutrition applications for mobile devices (e.g., personal digital assistants, smartphones) are becoming increasingly accessible and can assist with the difficult task of intake recording for dietary assessment and self-monitoring. This review is a compilation and discussion of research on this tool for dietary intake documentation in healthy populations and those trying to lose weight. The purpose is to compare this tool with conventional methods (e.g., 24-hour recall interviews, paper-based food records). Research databases were searched from January 2000 to April 2011, with the following criteria: healthy or weight loss populations, use of a mobile device nutrition application, and inclusion of at least one of three measures, which were the ability to capture dietary intake in comparison with conventional methods, dietary self-monitoring adherence, and changes in anthropometrics and/or dietary intake. Eighteen studies are discussed. Two application categories were identified: those with which users select food and portion size from databases and those with which users photograph their food. Overall, positive feedback was reported with applications. Both application types had moderate to good correlations for assessing energy and nutrient intakes in comparison with conventional methods. For self-monitoring, applications versus conventional techniques (often paper records) frequently resulted in better self-monitoring adherence, and changes in dietary intake and/or anthropometrics. Nutrition applications for mobile devices have an exciting potential for use in dietetic practice.
[Surface characteristics of the acrylic resins according to the polishing methods].
Vitalariu, Anca Mihaela; Lazăr, Lenuţa; Buruiană, Tinca; Diaconu, Diana; Tatarciuc, Monica Silvia
2011-01-01
The objective of this study was to evaluate the effect of the polishing technique and glazing on the porosity of dental resins. The studied resins were: Castapress/Vertex, Prothyl Hot/Zermack, Rapid Simplified/Vertex, Duracryl Plus/Spofa Dental, Vertex-Soft/Vertex, and Superacryl Plus/Spofa Dental. Thirty specimens, five for each resin, of 50/25/2 cm in size were prepared. One surface of each sample was polished with extra-hard tungsten carbide burs, the other surface with extra-hard extra-fine and diamond burs. The final polishing was done using a conventional method: pumice, water and a lathe bristle brush for 90 seconds at 1500 rpm, and a soft leather polishing wheel for 90 seconds at 3000 rpm. Twenty surfaces were glazed after polishing with Glaze/Bosworth. Vertex-Soft specimens were not polished because this is a resilient material. Surface porosity of the acrylic resin specimens was measured by optical microscopy. The lowest porosity was obtained by conventional polishing combined with glazing. No differences were noticed between glazed and non-glazed self-curing resin specimens, but there were differences between self-curing and heat-curing resins. The conventional lathe polishing method is an effective and reliable technique for polishing dental resins. Specimens of self-curing resin had higher porosity than heat-curing resin after the same surface treatment. The highest surface smoothness was obtained by conventional lathe polishing completed by glazing.
Scanning Electron Microscope-Cathodoluminescence Analysis of Rare-Earth Elements in Magnets.
Imashuku, Susumu; Wagatsuma, Kazuaki; Kawai, Jun
2016-02-01
Scanning electron microscope-cathodoluminescence (SEM-CL) analysis was performed for neodymium-iron-boron (NdFeB) and samarium-cobalt (Sm-Co) magnets to analyze the rare-earth elements present in the magnets. We examined the advantages of SEM-CL analysis over conventional analytical methods such as SEM-energy-dispersive X-ray (EDX) spectroscopy and SEM-wavelength-dispersive X-ray (WDX) spectroscopy for elemental analysis of rare-earth elements in NdFeB magnets. Luminescence spectra of chloride compounds of elements in the magnets were measured by the SEM-CL method. Chloride compounds were obtained by the dropwise addition of hydrochloric acid on the magnets followed by drying in vacuum. Neodymium, praseodymium, terbium, and dysprosium were separately detected in the NdFeB magnets, and samarium was detected in the Sm-Co magnet by the SEM-CL method. In contrast, it was difficult to distinguish terbium and dysprosium in the NdFeB magnet with a dysprosium concentration of 1.05 wt% by conventional SEM-EDX analysis. Terbium with a concentration of 0.02 wt% in an NdFeB magnet was detected by SEM-CL analysis, but not by conventional SEM-WDX analysis. SEM-CL analysis is advantageous over conventional SEM-EDX and SEM-WDX analyses for detecting trace rare-earth elements in NdFeB magnets, particularly dysprosium and terbium.
Robert J. Ross; Roy F. Pellerin; Norbert Volny; William W. Salsig; Robert H. Falk
1999-01-01
This guide was prepared to assist inspectors in the use of stress wave timing instruments and the various methods of locating and defining areas of decay in timber bridge members. The first two sections provide (a) background information regarding conventional methods to locate and measure decay in timber bridges and (b) the principles of stress wave nondestructive...
Robert Ross; Roy F. Pellerin; Norbert Volny; William W. Salsig; Robert H. Falk
2000-01-01
This guide was prepared to assist inspectors in the use of stress wave timing instruments and various methods of locating and defining areas of decay in timber members in historic structures. The first two sections provide (a) background information regarding conventional methods to locate and measure decay in historic structures and (b) the principles of stress wave...
Multiscale analysis of heart rate dynamics: entropy and time irreversibility measures.
Costa, Madalena D; Peng, Chung-Kang; Goldberger, Ary L
2008-06-01
Cardiovascular signals are largely analyzed using traditional time and frequency domain measures. However, such measures fail to account for important properties related to multiscale organization and non-equilibrium dynamics. The complementary role of conventional signal analysis methods and emerging multiscale techniques is therefore an important frontier area of investigation. The key finding of this presentation is that two recently developed multiscale computational tools (multiscale entropy and multiscale time irreversibility) are able to extract information from cardiac interbeat interval time series that is not captured by traditional methods based on mean, variance or Fourier spectrum (two-point correlation) techniques. These new methods, with careful attention to their limitations, may be useful in diagnostics, risk stratification and detection of toxicity of cardiac drugs.
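Multiscale entropy, one of the two tools discussed, coarse-grains the series at successive scales and computes sample entropy at each scale. A minimal numpy sketch, assuming the common parameter choices m = 2 and r = 0.15 × SD of the original series (held fixed across scales, as in the original formulation); function names are illustrative:

```python
import numpy as np

def sample_entropy(x, m, r):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points (Chebyshev distance <= r,
    self-matches excluded) also match for m + 1 points."""
    x = np.asarray(x, float)
    def pair_matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        c = 0
        for i in range(len(t) - 1):
            c += np.sum(np.max(np.abs(t[i + 1:] - t[i]), axis=1) <= r)
        return c
    b, a = pair_matches(m), pair_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, scale):
    """Means of consecutive non-overlapping windows of length `scale`."""
    n = (len(x) // scale) * scale
    return np.asarray(x[:n], float).reshape(-1, scale).mean(axis=1)

def multiscale_entropy(x, scales=(1, 2, 3, 4, 5), m=2):
    r = 0.15 * np.std(x)   # fixed from the original series across scales
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]
```

For uncorrelated white noise the entropy falls monotonically with scale, whereas for 1/f-like signals such as healthy interbeat series it stays roughly constant; that contrast is the discriminative behavior the abstract refers to.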
Multiscale Analysis of Heart Rate Dynamics: Entropy and Time Irreversibility Measures
Peng, Chung-Kang; Goldberger, Ary L.
2016-01-01
Cardiovascular signals are largely analyzed using traditional time and frequency domain measures. However, such measures fail to account for important properties related to multiscale organization and nonequilibrium dynamics. The complementary role of conventional signal analysis methods and emerging multiscale techniques is therefore an important frontier area of investigation. The key finding of this presentation is that two recently developed multiscale computational tools (multiscale entropy and multiscale time irreversibility) are able to extract information from cardiac interbeat interval time series that is not captured by traditional methods based on mean, variance or Fourier spectrum (two-point correlation) techniques. These new methods, with careful attention to their limitations, may be useful in diagnostics, risk stratification and detection of toxicity of cardiac drugs. PMID:18172763
Natural leathers from natural materials: progressing toward a new arena in leather processing.
Saravanabhavan, Subramani; Thanikaivelan, Palanisamy; Rao, Jonnalagadda Raghava; Nair, Balachandran Unni; Ramasami, Thirumalachari
2004-02-01
Globally, the leather industry is currently undergoing radical transformation due to pollution and discharge legislation. The industry is thus under pressure to look for cleaner options for processing raw hides and skins. Conventional pre-tanning, tanning and post-tanning processes are known to contribute more than 98% of the total pollution load from leather processing. The conventional tanning process involves a "do-undo" principle. Furthermore, the conventional methods employed in leather processing subject the skin/hide to a wide variation in pH (2.8-13.0). This results in the emission of large pollution loads such as BOD, COD, TDS, TS, sulfates, chlorides and chromium. In the approach illustrated here, hair and flesh removal as well as fiber opening have been achieved using biocatalysts at pH 8.0, pickle-free natural tanning employing vegetable tannins, and post-tanning using environmentally friendly chemicals. Hence, this process involves dehairing, fiber opening, and pickle-free natural tanning followed by ecofriendly post-tanning. The extent of hair removal and opening up of fiber bundles was found to be comparable to that of conventionally processed leathers, as substantiated through scanning electron microscopic analysis and softness measurements. Performance of the leathers is shown to be on par with conventionally chrome-tanned leathers through physical and hand evaluation. The process also exhibits zero metal (chromium) discharge and significant reductions in BOD, COD, TDS, and TS loads of 83, 69, 96, and 96%, respectively. Furthermore, the developed process seems to be economically viable.
Obermayr, U; Rose, A; Geier, M
2010-11-01
We have developed a novel test cage and improved method for the evaluation of mosquito repellents. The method is compatible with the United States Environmental Protection Agency, 2000 draft OPPTS 810.3700 Product Performance Test Guidelines for Testing of Insect Repellents. The Biogents cages (BG-cages) require fewer test mosquitoes than conventional cages and are more comfortable for the human volunteers. The novel cage allows a section of treated forearm from a volunteer to be exposed to mosquito probing through a window. This design minimizes residual contamination of cage surfaces with repellent. In addition, an air ventilation system supplies conditioned air to the cages after each single test, to flush out and prevent any accumulation of test substances. During biting activity tests, the untreated skin surface does not receive bites because of a screen placed 150 mm above the skin. Compared with the OPPTS 810.3700 method, the BG-cage is smaller (27 liters, compared with 56 liters) and contains 30 rather than hundreds of blood-hungry female mosquitoes. We compared the performance of a proprietary repellent formulation containing 20% KBR3023 with four volunteers on Aedes aegypti (L.) (Diptera: Culicidae) in BG- and conventional cages. Repellent protection time was shorter in tests conducted with conventional cages. The average 95% protection time was 4.5 +/- 0.4 h in conventional cages and 7.5 +/- 0.6 h in the novel BG-cages. The protection times measured in BG-cages were more similar to the protection times determined with these repellents in field tests.
Oppugning the assumptions of spatial averaging of segment and joint orientations.
Pierrynowski, Michael Raymond; Ball, Kevin Arthur
2009-02-09
Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data with low dispersion, an isotropic distribution, and second and third angle parameters below 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
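As an illustration of the matrix-based alternative to Euler-angle averaging, the sketch below computes the Euclidean (chordal) mean of rotation matrices: arithmetic averaging of the matrices followed by SVD projection back onto SO(3). numpy is assumed and the function names are ours; the example reproduces the classic failure of parameter averaging near the ±180 degree wrap-around.

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix for a rotation of `deg` degrees about the z-axis."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def chordal_mean(rotations):
    """Euclidean (chordal) mean: arithmetic average of the matrices,
    projected back onto SO(3) via an SVD polar projection."""
    M = np.mean(rotations, axis=0)
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # enforce det = +1
    return U @ D @ Vt

# Yaws of +170 deg and -170 deg: averaging the angle parameters gives
# 0 deg, while the chordal mean correctly returns a 180 deg rotation.
R_mean = chordal_mean([rot_z(170.0), rot_z(-170.0)])
```

The Riemannian (geodesic) mean discussed in the abstract would instead iterate in the tangent space of SO(3); for low-dispersion data the two matrix-based means nearly coincide.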
Koopman, Richelle J.; Kochendorfer, Karl M.; Moore, Joi L.; Mehr, David R.; Wakefield, Douglas S.; Yadamsuren, Borchuluun; Coberly, Jared S.; Kruse, Robin L.; Wakefield, Bonnie J.; Belden, Jeffery L.
2011-01-01
PURPOSE We compared use of a new diabetes dashboard screen with use of a conventional approach of viewing multiple electronic health record (EHR) screens to find data needed for ambulatory diabetes care. METHODS We performed a usability study, including a quantitative time study and qualitative analysis of information-seeking behaviors. While being recorded with Morae Recorder software and “think-aloud” interview methods, 10 primary care physicians first searched their EHR for 10 diabetes data elements using a conventional approach for a simulated patient, and then using a new diabetes dashboard for another. We measured time, number of mouse clicks, and accuracy. Two coders analyzed think-aloud and interview data using grounded theory methodology. RESULTS The mean time needed to find all data elements was 5.5 minutes using the conventional approach vs 1.3 minutes using the diabetes dashboard (P <.001). Physicians correctly identified 94% of the data requested using the conventional method, vs 100% with the dashboard (P <.01). The mean number of mouse clicks was 60 for conventional searching vs 3 clicks with the diabetes dashboard (P <.001). A common theme was that in everyday practice, if physicians had to spend too much time searching for data, they would either continue without it or order a test again. CONCLUSIONS Using a patient-specific diabetes dashboard improves both the efficiency and accuracy of acquiring data needed for high-quality diabetes care. Usability analysis tools can provide important insights into the value of optimizing physician use of health information technologies. PMID:21911758
NASA Astrophysics Data System (ADS)
Ri, Shien; Tsuda, Hiroshi; Yoshida, Takeshi; Umebayashi, Takashi; Sato, Akiyoshi; Sato, Eiichi
2015-07-01
Optical methods providing full-field deformation data are of potentially enormous interest to mechanical engineers. In this study, an in-plane and out-of-plane displacement measurement method based on a dual-camera imaging system is proposed. The in-plane and out-of-plane displacements are determined simultaneously from two in-plane displacement fields observed by two digital cameras at different view angles. The fundamental measurement principle and experimental results confirming its accuracy are presented. In addition, we applied this method to displacement measurement in a static loading and bending test of a solid rocket motor case (CFRP material; 2.2 m diameter and 2.3 m long) for an up-to-date Epsilon rocket developed by JAXA. The effectiveness and measurement accuracy are confirmed by comparison with a conventional displacement sensor. This method could be useful for diagnosing the reliability of large-scale space structures in rocket development.
Injection molding lens metrology using software configurable optical test system
NASA Astrophysics Data System (ADS)
Zhan, Cheng; Cheng, Dewen; Wang, Shanshan; Wang, Yongtian
2016-10-01
Optical plastic lenses produced by injection molding machines possess the numerous advantages of light weight, impact resistance, low cost, etc. The measuring methods in the optical shop are mainly interferometry and profilometry. However, these instruments are not only expensive but also difficult to align. The software configurable optical test system (SCOTS) is based on the geometry of fringe reflection and the phase measuring deflectometry (PMD) method, and can be used to measure large-diameter mirrors, aspheric, and freeform surfaces rapidly, robustly, and accurately. In addition to the conventional phase-shifting method, we propose another data collection method, dot-matrix projection. We also use Zernike polynomials to correct the camera distortion. This polynomial-fitting mapping-distortion method is not only simple to operate but also offers high conversion precision. We simulate this test system measuring a concave surface using CODE V and MATLAB. The simulation results show that the dot-matrix projection method has high accuracy and that SCOTS has important significance for on-line inspection in the optical shop.
Zink, F E; McCollough, C H
1994-08-01
The unique geometry of electron-beam CT (EBCT) scanners produces radiation dose profiles with widths that can differ considerably from the corresponding nominal scan width. Additionally, EBCT scanners produce both complex (multiple-slice) and narrow (3 mm) radiation profiles. This work describes the measurement of the axial dose distribution from EBCT within a scattering phantom using film dosimetry methods, which offer increased convenience and spatial resolution compared to thermoluminescent dosimetry (TLD) techniques. Therapy localization film was cut into 8 x 220 mm strips and placed within specially constructed light-tight holders for placement within the cavities of a CT Dose Index (CTDI) phantom. The film was calibrated using a conventional overhead x-ray tube with spectral characteristics matched to the EBCT scanner (130 kVp, 10 mm Al HVL). The films were digitized at five samples per mm and calibrated dose profiles plotted as a function of z-axis position. Errors due to angle of incidence and beam hardening were estimated to be less than 5% and 10%, respectively. The integral exposure under film dose profiles agreed with ion-chamber measurements to within 15%. Exposures measured along the radiation profile differed from TLD measurements by an average of 5%. The film technique provided acceptable accuracy and convenience in comparison to conventional TLD methods, and allowed high spatial-resolution measurement of EBCT radiation dose profiles.
Determination of Peukert's Constant Using Impedance Spectroscopy: Application to Supercapacitors.
Mills, Edmund Martin; Kim, Sangtae
2016-12-15
Peukert's equation is widely used to model the rate dependence of battery capacity, and has recently attracted attention for application to supercapacitors. Here we present a newly developed method to readily determine Peukert's constant using impedance spectroscopy. Impedance spectroscopy is ideal for this purpose as it has the capability of probing electrical performance of a device over a wide range of time-scales within a single measurement. We demonstrate that the new method yields consistent results with conventional galvanostatic measurements through applying it to commercially available supercapacitors. Additionally, the novel method is much simpler and more precise, making it an attractive alternative for the determination of Peukert's constant.
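For reference, the conventional galvanostatic route the authors compare their impedance method against fits Peukert's equation, I^k · t = constant, to discharge times measured at several currents; k then falls out of a log-log regression. A minimal sketch (numpy assumed; the function name is ours, not from the paper):

```python
import numpy as np

def peukert_constant(currents, discharge_times):
    """Fit Peukert's equation I**k * t = C_p in log space:
    log(t) = log(C_p) - k * log(I), so k is minus the slope of the
    regression of log(t) on log(I)."""
    slope, _ = np.polyfit(np.log(currents), np.log(discharge_times), 1)
    return -slope
```

k = 1 corresponds to an ideal, rate-independent capacity; real batteries and supercapacitors show k > 1, with larger values indicating stronger capacity loss at high rates.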
Combined pulse-oximeter-NIRS system for biotissue diagnostics
NASA Astrophysics Data System (ADS)
Hovhannisyan, Vladimir A.
2005-08-01
A multi-wavelength (670, 805, 848 and 905 nm), multi-detector device for non-invasive measurement of the concentration of biochemical components in human or animal tissues, combining the methods of conventional pulse oximetry and near-infrared spectroscopy (NIRS), has been developed. The portable and clinically applicable system measures heart rate and the oxygen saturation of arterial hemoglobin (pulse oximetry method) as well as local absolute concentrations of oxyhemoglobin, deoxyhemoglobin and oxidized cytochrome aa3 or other IR-absorbing compounds (NIRS method). The system can be applied to monitoring oxygen availability and utilization by the brain in neonates and adults, in neurotraumatology, intensive care medicine, transplantation and plastic surgery, and in sports, high-altitude and aviation medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Attota, Ravikiran, E-mail: Ravikiran.attota@nist.gov; Dixson, Ronald G.
We experimentally demonstrate that the three-dimensional (3-D) shape variations of nanometer-scale objects can be resolved and measured with sub-nanometer scale sensitivity using conventional optical microscopes by analyzing 4-D optical data using the through-focus scanning optical microscopy (TSOM) method. These initial results show that TSOM-determined cross-sectional (3-D) shape differences of 30 nm–40 nm wide lines agree well with critical-dimension atomic force microscope measurements. The TSOM method showed a linewidth uncertainty of 1.22 nm (k = 2). Complex optical simulations are not needed for analysis using the TSOM method, making the process simple, economical, fast, and ideally suited for high volume nanomanufacturing process monitoring.
Biomedical Implementation of Liquid Metal Ink as Drawable ECG Electrode and Skin Circuit
Yu, Yang; Zhang, Jie; Liu, Jing
2013-01-01
Background Conventional ways of making bio-electrodes are generally complicated, expensive and poorly conformable. Here we describe for the first time the method of applying Ga-based liquid metal ink as drawable electrocardiogram (ECG) electrodes. This material offers unique merits in both liquid-phase conformability and high electrical conductivity, which provides flexible ways of making electrical circuits on the skin surface and a prospective substitute for conventional rigid printed circuit boards (PCBs). Methods Fundamental measurements of the impedance and polarization voltage of the liquid metal ink were carried out to evaluate its basic electrical properties. Conceptual experiments were performed drawing the alloy as bio-electrodes to acquire ECG signals from both rabbit and human via a wireless module developed on a mobile phone. Further, a typical electrical circuit was drawn on the palm with the ink to demonstrate its potential for implementing more sophisticated skin circuits. Results With an oxide concentration of 0.34%, the resistivity of the liquid metal ink was measured as 44.1 µΩ·cm, with quite low reactance in the form of a straight line. Its peak polarization voltage with physiological saline was detected as −0.73 V. The quality of the ECG waves detected from the liquid metal electrodes was found to be as good as that of conventional electrodes, in both rabbit and human experiments. In addition, the circuit drawn with the liquid metal ink on the palm also ran efficiently: when the loop was switched on, all the light-emitting diodes (LEDs) were lit and emitted colorful light. Conclusions The liquid metal ink offers unique printable electrical properties as both bio-electrodes and electrical wires. The implemented ECG measurement on a biological surface and the successfully run skin circuit demonstrated the conformability and attachment of the liquid metal.
The present method is expected to substantially advance physiological measurement and biological circuit manufacturing techniques. PMID:23472220
Klimowicz, M D; Nizanski, W; Batkowski, F; Savic, M A
2008-07-01
The aim of these experiments was to compare conventional microscopic methods of evaluating pigeon sperm motility and concentration with those measured by computer-assisted sperm analysis (the CASA system). Semen was collected twice a week from two groups of pigeons, each of 40 males (group I: meat-type breed; group II: fancy pigeon), using the lumbo-sacral and cloacal region massage method. Ejaculates collected in each group were diluted 1:100 in BPSE solution and divided into two equal samples. One sample was examined subjectively by microscope and the second was analysed using the CASA system. The sperm concentration was measured by CASA using the anti-collision (AC) system and fluorescent staining (IDENT). There were no significant differences between the methods of evaluation of sperm concentration. High positive correlations in both groups were observed between the sperm concentration estimated by Thom counting chamber and AC (r=0.87 and r=0.91, respectively), and between the sperm concentration evaluated by Thom counting chamber and IDENT (r=0.85 and r=0.90, respectively). The mean values for CASA measurement of the proportion of motile spermatozoa (MOT) and progressive movement (PMOT) were significantly lower than the values estimated subjectively in both groups of pigeons (p≤0.05 and p≤0.01, respectively). Positive correlations in MOT and PMOT were noted between both methods of evaluation. The CASA system is a very rapid, objective and sensitive method for detecting subtle motility characteristics as well as sperm concentration, and is recommended for future research into pigeon semen.
Optical efficiency of solar concentrators by a reverse optical path method.
Parretta, A; Antonini, A; Milan, E; Stefancich, M; Martinelli, G; Armani, M
2008-09-15
A method for the optical characterization of a solar concentrator, based on the reverse illumination by a Lambertian source and measurement of intensity of light projected on a far screen, has been developed. It is shown that the projected light intensity is simply correlated to the angle-resolved efficiency of a concentrator, conventionally obtained by a direct illumination procedure. The method has been applied by simulating simple reflective nonimaging and Fresnel lens concentrators.
NASA Astrophysics Data System (ADS)
Hosani, E. Al; Zhang, M.; Abascal, J. F. P. J.; Soleimani, M.
2016-11-01
Electrical capacitance tomography (ECT) is an imaging technology used to reconstruct the permittivity distribution within a sensing region. So far, ECT has been used primarily to image non-conductive media, since if the conductivity of the imaged object is high, the capacitance measuring circuit is almost short-circuited by the conduction path and a clear image cannot be produced using the standard image reconstruction approaches. This paper tackles the problem of imaging metallic samples with conventional ECT systems by investigating the two main aspects of image reconstruction: the forward problem and the inverse problem. For the forward problem, two different methods of modeling a region of high conductivity in ECT are presented. For the inverse problem, three different algorithms for reconstructing high-contrast images are examined. The first two, the linear single-step Tikhonov method and the iterative total variation regularization method, use two sets of ECT data to reconstruct the image in time-difference mode. The third, the level set method, uses absolute ECT measurements and was developed using a metallic forward model. The results indicate that the applications of conventional ECT systems can be extended to metal samples using the suggested algorithms and forward model, especially the level set algorithm, which finds the boundary of the metal.
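Of the three reconstruction algorithms examined, the linear single-step Tikhonov method is the simplest: with a precomputed sensitivity (Jacobian) matrix J and the vector of capacitance changes between two measurement frames, the permittivity change is obtained in one regularized solve. A minimal sketch (numpy assumed; the matrix sizes in the test are illustrative, e.g. 66 inter-electrode capacitance measurements from a 12-electrode sensor):

```python
import numpy as np

def tikhonov_single_step(J, dC, lam=1e-3):
    """Linear one-step time-difference reconstruction:
    solve (J.T J + lam I) dx = J.T dC for the permittivity change dx.
    lam trades noise suppression against spatial resolution."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dC)
```

The iterative total variation and level set methods replace this quadratic penalty with edge-preserving or shape-based priors, which is what makes them better suited to the sharp metal/background contrast discussed in the abstract.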
Parikh, Harshal R; De, Anuradha S; Baveja, Sujata M
2012-07-01
Physicians and microbiologists have long recognized that the presence of living microorganisms in the blood of a patient carries considerable morbidity and mortality. Hence, blood cultures have become a critically important and frequently performed test in clinical microbiology laboratories for the diagnosis of sepsis. The aim was to compare the conventional blood culture method with the lysis centrifugation method in cases of sepsis. Two hundred nonduplicate blood cultures from cases of sepsis were analyzed concurrently by two methods for recovery of bacteria: the conventional blood culture method using trypticase soy broth and the lysis centrifugation method using saponin, with centrifugation at 3000 g for 30 minutes. Overall, bacteria were recovered from 17.5% of the 200 blood cultures. The conventional blood culture method had a higher yield of organisms, especially Gram-positive cocci; the lysis centrifugation method was comparable with respect to Gram-negative bacilli. In comparison to the conventional blood culture method, the sensitivity of the lysis centrifugation method in this study was 49.75%, the specificity 98.21%, and the diagnostic accuracy 89.5%. In almost every instance, growth was detected earlier by the lysis centrifugation method, a difference that was highly statistically significant (P value 0.000). Contamination by lysis centrifugation was minimal, while that by the conventional method was high. For the diagnosis of sepsis, combining the lysis centrifugation method with the conventional blood culture method using trypticase soy broth or biphasic media is advisable, in order to achieve faster recovery and a better yield of microorganisms.
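The sensitivity, specificity, and diagnostic accuracy quoted above come from the standard 2x2 comparison of an index test against a reference method. A small sketch; the counts in the example are illustrative round numbers, not the study's raw data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and diagnostic accuracy of an index
    test from its 2x2 contingency counts against a reference test."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Illustrative counts only (hypothetical, not the study's data):
sens, spec, acc = diagnostic_metrics(tp=50, fp=2, fn=50, tn=98)
```

With these counts the index test finds half of the reference-positive cultures (sensitivity 0.50) while rarely flagging reference-negative ones (specificity 0.98), a pattern similar in shape to the figures reported in the abstract.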
Supervised exercises for adults with acute lateral ankle sprain: a randomised controlled trial
van Rijn, Rogier M; van Os, Anton G; Kleinrensink, Gert-Jan; Bernsen, Roos MD; Verhaar, Jan AN; Koes, Bart W; Bierma-Zeinstra, Sita MA
2007-01-01
Background During the recovery period after acute ankle sprain, it is unclear whether conventional treatment should be supported by supervised exercise. Aim To evaluate the short- and long-term effectiveness of conventional treatment combined with supervised exercises compared with conventional treatment alone in patients with an acute ankle sprain. Design Randomised controlled clinical trial. Setting A total of 32 Dutch general practices and the hospital emergency department. Method Adults with an acute lateral ankle sprain consulting general practices or the hospital emergency department were allocated to either conventional treatment combined with supervised exercises or conventional treatment alone. Primary outcomes were subjective recovery (0–10 point scale) and the occurrence of a re-sprain. Measurements were carried out at intake, 4 weeks, 8 weeks, 3 months, and 1 year after injury. Data were analysed using intention-to-treat analyses. Results A total of 102 patients were enrolled and randomised to either conventional treatment alone or conventional treatment combined with supervised exercise. There was no significant difference between treatment groups concerning subjective recovery or occurrence of re-sprains after 3 months and 1-year of follow-up. Conclusion Conventional treatment combined with supervised exercises compared to conventional treatment alone during the first year after an acute lateral ankle sprain does not lead to differences in the occurrence of re-sprains or in subjective recovery. PMID:17925136
Kume, Teruyoshi; Kim, Byeong-Keuk; Waseda, Katsuhisa; Sathyanarayana, Shashidhar; Li, Wenguang; Teo, Tat-Jin; Yock, Paul G; Fitzgerald, Peter J; Honda, Yasuhiro
2013-02-01
The aim of this study was to evaluate a new fully automated lumen border tracing system based on a novel multifrequency processing algorithm. We developed the multifrequency processing method to enhance arterial lumen detection by exploiting the differential scattering characteristics of blood and arterial tissue. The implementation of the method can be integrated into current intravascular ultrasound (IVUS) hardware. This study was performed in vivo with conventional 40-MHz IVUS catheters (Atlantis SR Pro™, Boston Scientific Corp, Natick, MA) in 43 clinical patients with coronary artery disease. A total of 522 frames were randomly selected, and lumen areas were measured after automatically tracing lumen borders with the new tracing system and a commercially available tracing system (TraceAssist™) referred to as the "conventional tracing system." The data assessed by the two automated systems were compared with the results of manual tracings by experienced IVUS analysts. New automated lumen measurements showed better agreement with manual lumen area tracings compared with those of the conventional tracing system (correlation coefficient: 0.819 vs. 0.509). When compared against manual tracings, the new algorithm also demonstrated improved systematic error (mean difference: 0.13 vs. -1.02 mm²) and random variability (standard deviation of difference: 2.21 vs. 4.02 mm²) compared with the conventional tracing system. This preliminary study showed that the novel fully automated tracing system based on the multifrequency processing algorithm can provide more accurate lumen border detection than current automated tracing systems and thus offer a more reliable quantitative evaluation of lumen geometry. Copyright © 2011 Wiley Periodicals, Inc.
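Agreement with manual tracing is summarized above by three statistics: correlation, systematic error (mean difference), and random variability (standard deviation of differences). A small sketch of how such agreement statistics are computed from paired area measurements (the sample values are made up for illustration):

```python
import math

def agreement_stats(auto, manual):
    """Bias (mean difference), random variability (SD of differences),
    and Pearson correlation between paired measurements."""
    n = len(auto)
    diffs = [a - m for a, m in zip(auto, manual)]
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    mean_a, mean_m = sum(auto) / n, sum(manual) / n
    cov = sum((a - mean_a) * (m - mean_m) for a, m in zip(auto, manual))
    var_a = sum((a - mean_a) ** 2 for a in auto)
    var_m = sum((m - mean_m) ** 2 for m in manual)
    r = cov / math.sqrt(var_a * var_m)
    return bias, sd, r

# Illustrative paired lumen areas (mm^2), not data from the study:
bias, sd, r = agreement_stats([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```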
NASA Astrophysics Data System (ADS)
Nagihara, Seiichi; Hedlund, Magnus; Zacny, Kris; Taylor, Patrick T.
2014-03-01
The needle probe method (also known as the ‘hot wire’ or ‘line heat source’ method) is widely used for in-situ thermal conductivity measurements on terrestrial soils and marine sediments. Variants of this method have also been used (or planned) for measuring regolith on the surfaces of extra-terrestrial bodies (e.g., the Moon, Mars, and comets). In the near-vacuum condition on the lunar and planetary surfaces, the measurement method used on the earth cannot be simply duplicated, because thermal conductivity of the regolith can be ~2 orders of magnitude lower. In addition, the planetary probes have much greater diameters, due to engineering requirements associated with the robotic deployment on extra-terrestrial bodies. All of these factors contribute to the planetary probes requiring a much longer time of measurement, several tens of (if not over a hundred) hours, while a conventional terrestrial needle probe needs only 1 to 2 min. The long measurement time complicates the surface operation logistics of the lander. It also negatively affects accuracy of the thermal conductivity measurement, because the cumulative heat loss along the probe is no longer negligible. The present study improves the data reduction algorithm of the needle probe method by shortening the measurement time on planetary surfaces by an order of magnitude. The main difference between the new scheme and the conventional one is that the former uses the exact mathematical solution to the thermal model on which the needle probe measurement theory is based, while the latter uses an approximate solution that is valid only for large times. The present study demonstrates the benefit of the new data reduction technique by applying it to data from a series of needle probe experiments carried out in a vacuum chamber on a lunar regolith simulant, JSC-1A. 
The use of the exact solution does, however, have the disadvantage of requiring three additional parameters, but two of them (the diameter and the volumetric heat capacity of the probe) can be measured, and the third (the volumetric heat capacity of the regolith/simulant) may be estimated from surface geologic observation and temperature measurements. Therefore, overall, the new data reduction scheme would make in-situ thermal conductivity measurement more practical on planetary missions.
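The distinction between the exact solution and the conventional large-time scheme can be made concrete. For an idealized line heat source of strength q (W/m) in a medium of conductivity λ and diffusivity α, the temperature rise at radius r is ΔT = (q/4πλ)·E1(r²/4αt), while the conventional scheme keeps only the large-time terms, ΔT ≈ (q/4πλ)·[ln(4αt/r²) − γ]. A sketch in Python (all parameter values are illustrative, not an actual probe's specifications):

```python
import math

EULER_GAMMA = 0.5772156649015329

def exp_integral_e1(x, terms=60):
    # Series E1(x) = -gamma - ln(x) + sum_{n>=1} (-1)^(n+1) x^n / (n * n!),
    # convergent and accurate for the small arguments (large times) used here.
    total = -EULER_GAMMA - math.log(x)
    for n in range(1, terms + 1):
        total += (-1) ** (n + 1) * x ** n / (n * math.factorial(n))
    return total

def delta_t_exact(q, lam, alpha, r, t):
    # Exact infinite-line-source temperature rise at radius r, time t
    return q / (4 * math.pi * lam) * exp_integral_e1(r * r / (4 * alpha * t))

def delta_t_large_time(q, lam, alpha, r, t):
    # Conventional approximation, valid only when r^2/(4*alpha*t) is small
    return q / (4 * math.pi * lam) * (math.log(4 * alpha * t / (r * r)) - EULER_GAMMA)

# Illustrative values: low-conductivity regolith, a thick planetary probe
q, lam, alpha, r = 0.2, 0.01, 4e-8, 0.005
```

The approximation is good at large times but degrades early in the heating, which is why fitting the exact solution lets the measurement stop sooner.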
NASA Technical Reports Server (NTRS)
Nagihara, S.; Hedlund, M.; Zacny, K.; Taylor, P. T.
2013-01-01
The needle probe method (also known as the 'hot wire' or 'line heat source' method) is widely used for in-situ thermal conductivity measurements on soils and marine sediments on the earth. Variants of this method have also been used (or planned) for measuring regolith on the surfaces of extra-terrestrial bodies (e.g., the Moon, Mars, and comets). In the near-vacuum condition on the lunar and planetary surfaces, the measurement method used on the earth cannot be simply duplicated, because thermal conductivity of the regolith can be approximately 2 orders of magnitude lower. In addition, the planetary probes have much greater diameters, due to engineering requirements associated with the robotic deployment on extra-terrestrial bodies. All of these factors contribute to the planetary probes requiring a much longer time of measurement, several tens of (if not over a hundred) hours, while a conventional terrestrial needle probe needs only 1 to 2 minutes. The long measurement time complicates the surface operation logistics of the lander. It also negatively affects accuracy of the thermal conductivity measurement, because the cumulative heat loss along the probe is no longer negligible. The present study improves the data reduction algorithm of the needle probe method by shortening the measurement time on planetary surfaces by an order of magnitude. The main difference between the new scheme and the conventional one is that the former uses the exact mathematical solution to the thermal model on which the needle probe measurement theory is based, while the latter uses an approximate solution that is valid only for large times. The present study demonstrates the benefit of the new data reduction technique by applying it to data from a series of needle probe experiments carried out in a vacuum chamber on JSC-1A lunar regolith simulant.
The use of the exact solution does, however, have the disadvantage of requiring three additional parameters, but two of them (the diameter and the volumetric heat capacity of the probe) can be measured, and the third (the volumetric heat capacity of the regolith/simulant) may be estimated from surface geologic observation and temperature measurements. Therefore, overall, the new data reduction scheme would make in-situ thermal conductivity measurement more practical on planetary missions.
Shavit, Itai; Brant, Rollin; Nijssen-Jordan, Cheri; Galbraith, Roger; Johnson, David W
2006-12-01
Assessment of dehydration in young children currently depends on clinical judgment, which is relatively inaccurate. By using digital videography, we developed a way to assess capillary-refill time more objectively. Our goal was to determine whether digitally measured capillary-refill time assesses the presence of significant dehydration (≥5%) in young children with gastroenteritis more accurately than conventional capillary refill and overall clinical assessment. We prospectively enrolled children with gastroenteritis, 1 month to 5 years of age, who were evaluated in a tertiary-care pediatric emergency department and judged by a triage nurse to be at least mildly dehydrated. Before any treatment, we measured the weight and digitally measured capillary-refill time of these children. Pediatric emergency physicians determined capillary-refill time by using conventional methods and degree of dehydration by overall clinical assessment by using a 7-point Likert scale. Postillness weight gain was used to estimate fluid deficit; beginning 48 hours after assessment, children were reweighed every 24 hours until 2 sequential weights differed by no more than 2%. We compared the accuracy of digitally measured capillary-refill time with conventional capillary refill and overall clinical assessment by determining sensitivities, specificities, likelihood ratios, and areas under the receiver operator characteristic curves. A total of 83 patients were enrolled and had complete follow-up; 13 of these patients had significant dehydration (≥5% of body weight). The areas under the receiver operator characteristic curves for digitally measured capillary-refill time and overall clinical assessment relative to fluid deficit (<5% vs. ≥5%) were 0.99 and 0.88, respectively. Positive likelihood ratios were 11.7 for digitally measured capillary-refill time, 4.5 for conventional capillary refill, and 4.1 for overall clinical assessment.
Results of this prospective cohort study suggest that digitally measured capillary-refill time more accurately predicts significant dehydration (≥5%) in young children with gastroenteritis than overall clinical assessment.
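The positive likelihood ratios reported above follow directly from sensitivity and specificity. A one-line sketch of the relationship (the input values below are illustrative, not the study's):

```python
def positive_likelihood_ratio(sensitivity, specificity):
    # LR+ = P(positive test | dehydrated) / P(positive test | not dehydrated)
    return sensitivity / (1.0 - specificity)

# Illustrative values: a test with 90% sensitivity and 92% specificity
lr_plus = positive_likelihood_ratio(0.90, 0.92)
```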
Polechoński, Jacek; Mynarski, Władysław; Nawrocka, Agnieszka
2015-11-01
[Purpose] The objective of this study was to evaluate the usefulness of pedometry and accelerometry as diagnostic measures of energy expenditure in Nordic walking and conventional walking. [Subjects and Methods] The study included 20 female students (age, 24 ± 2.3 years). The study used three types of measuring devices, namely a heart rate monitor (Polar S610i), a Caltrac accelerometer, and a pedometer (Yamax SW-800). The walking pace of 110 steps/min was set by using a metronome. [Results] The students who walked with poles covered the 1,000-m distance 36.3 s faster and with 65.5 fewer steps than in conventional walking. Correlation analysis revealed a moderate interrelationship between the results obtained with a pedometer and those obtained with an accelerometer during Nordic walking (r = 0.55) and a high correlation during conventional walking (r = 0.85). [Conclusion] A pedometer and a Caltrac accelerometer should not be used interchangeably when comparing energy expenditure in Nordic walking and conventional walking.
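Since the cadence was fixed by the metronome, the reported step difference is essentially the cadence multiplied by the time difference. A quick consistency check in Python (using only numbers quoted in the abstract):

```python
cadence_steps_per_s = 110 / 60.0   # metronome-paced cadence (110 steps/min)
time_difference_s = 36.3           # Nordic walking was this much faster over 1,000 m

# Expected reduction in step count at a fixed cadence: delta_steps = cadence * delta_t
expected_step_difference = cadence_steps_per_s * time_difference_s
```

The predicted difference of about 66.6 steps agrees closely with the 65.5 fewer steps reported.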
Polechoński, Jacek; Mynarski, Władysław; Nawrocka, Agnieszka
2015-01-01
[Purpose] The objective of this study was to evaluate the usefulness of pedometry and accelerometry as diagnostic measures of energy expenditure in Nordic walking and conventional walking. [Subjects and Methods] The study included 20 female students (age, 24 ± 2.3 years). The study used three types of measuring devices, namely a heart rate monitor (Polar S610i), a Caltrac accelerometer, and a pedometer (Yamax SW-800). The walking pace of 110 steps/min was set by using a metronome. [Results] The students who walked with poles covered the 1,000-m distance 36.3 s faster and with 65.5 fewer steps than in conventional walking. Correlation analysis revealed a moderate interrelationship between the results obtained with a pedometer and those obtained with an accelerometer during Nordic walking (r = 0.55) and a high correlation during conventional walking (r = 0.85). [Conclusion] A pedometer and a Caltrac accelerometer should not be used interchangeably when comparing energy expenditure in Nordic walking and conventional walking. PMID:26696730
Terahertz time-domain spectroscopy of edible oils
Valchev, Dimitar G.
2017-01-01
Chemical degradation of edible oils has been studied using conventional spectroscopic methods spanning the spectrum from ultraviolet to mid-IR. However, the possibility of morphological changes of oil molecules that can be detected at terahertz frequencies is beginning to receive some attention. Furthermore, the rapidly decreasing cost of this technology and its capability for convenient, in situ measurement of material properties raise the possibility of monitoring oil during cooking and processing at production facilities, and more generally within the food industry. In this paper, we test the hypothesis that oil undergoes chemical and physical changes when heated above the smoke point, which can be detected in the 0.05–2 THz spectral range, measured using the conventional terahertz time-domain spectroscopy technique. The measurements demonstrate a null result in that there is no significant change in the spectra of terahertz optical parameters after heating above the smoke point for 5 min. PMID:28680681
Terahertz time-domain spectroscopy of edible oils
NASA Astrophysics Data System (ADS)
Dinovitser, Alex; Valchev, Dimitar G.; Abbott, Derek
2017-06-01
Chemical degradation of edible oils has been studied using conventional spectroscopic methods spanning the spectrum from ultraviolet to mid-IR. However, the possibility of morphological changes of oil molecules that can be detected at terahertz frequencies is beginning to receive some attention. Furthermore, the rapidly decreasing cost of this technology and its capability for convenient, in situ measurement of material properties raise the possibility of monitoring oil during cooking and processing at production facilities, and more generally within the food industry. In this paper, we test the hypothesis that oil undergoes chemical and physical changes when heated above the smoke point, which can be detected in the 0.05-2 THz spectral range, measured using the conventional terahertz time-domain spectroscopy technique. The measurements demonstrate a null result in that there is no significant change in the spectra of terahertz optical parameters after heating above the smoke point for 5 min.
Terahertz time-domain spectroscopy of edible oils.
Dinovitser, Alex; Valchev, Dimitar G; Abbott, Derek
2017-06-01
Chemical degradation of edible oils has been studied using conventional spectroscopic methods spanning the spectrum from ultraviolet to mid-IR. However, the possibility of morphological changes of oil molecules that can be detected at terahertz frequencies is beginning to receive some attention. Furthermore, the rapidly decreasing cost of this technology and its capability for convenient, in situ measurement of material properties raise the possibility of monitoring oil during cooking and processing at production facilities, and more generally within the food industry. In this paper, we test the hypothesis that oil undergoes chemical and physical changes when heated above the smoke point, which can be detected in the 0.05-2 THz spectral range, measured using the conventional terahertz time-domain spectroscopy technique. The measurements demonstrate a null result in that there is no significant change in the spectra of terahertz optical parameters after heating above the smoke point for 5 min.
Automated quantification of pancreatic β-cell mass
Golson, Maria L.; Bush, William S.
2014-01-01
β-Cell mass is a parameter commonly measured in studies of islet biology and diabetes. However, the rigorous quantification of pancreatic β-cell mass using conventional histological methods is a time-consuming process. Rapidly evolving virtual slide technology with high-resolution slide scanners and newly developed image analysis tools has the potential to transform β-cell mass measurement. To test the effectiveness and accuracy of this new approach, we assessed pancreata from normal C57Bl/6J mice and from mouse models of β-cell ablation (streptozotocin-treated mice) and β-cell hyperplasia (leptin-deficient mice), using a standardized systematic sampling of pancreatic specimens. Our data indicate that automated analysis of virtual pancreatic slides is highly reliable and yields results consistent with those obtained by conventional morphometric analysis. This new methodology will allow investigators to dramatically reduce the time required for β-cell mass measurement by automating high-resolution image capture and analysis of entire pancreatic sections. PMID:24760991
Prediction of crude protein and oil content of soybeans using Raman spectroscopy
USDA-ARS?s Scientific Manuscript database
While conventional chemical analysis methods for food nutrients require time-consuming, labor-intensive, and invasive pretreatment procedures, Raman spectroscopy can be used to measure a variety of food components rapidly and non-destructively and does not require supervision from experts. The purpo...
Effects of field storage method on E. coli concentrations measured in storm water runoff
USDA-ARS?s Scientific Manuscript database
Storm water runoff is increasingly assessed for fecal indicator organisms (e.g., Escherichia coli, E. coli) and its impact on contact recreation. Concurrently, use of autosamplers along with logistic, economic, technical, and personnel barriers are challenging conventional protocols for sample hold...
Miyajima, Saori; Tanaka, Takayuki; Imamura, Yumeko; Kusaka, Takashi
2015-01-01
We estimate lumbar torque based on motion measurement using only three inertial sensors. First, human motion is measured by 6-axis motion tracking devices, each combining a 3-axis accelerometer and a 3-axis gyroscope, placed on the shank, thigh, and back. Next, the lumbar joint torque during the motion is estimated by kinematic musculoskeletal simulation. The conventional method for estimating joint torque uses full-body motion data measured by an optical motion capture system. In this research, however, joint torque is estimated using only the three link angles of the trunk, thigh, and shank. The utility of our method was verified by experiments in which we measured simultaneous knee- and waist-bending motion. As a result, we were able to estimate the lumbar joint torque from the measured motion.
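The paper's estimate comes from a kinematic musculoskeletal simulation driven by the three link angles. As a much cruder illustration of why the trunk angle alone already carries most of the lumbar-load information, a static single-segment model gives τ = m·g·d·sin(θ). This is a sketch under assumed parameters, not the authors' method; the mass and lever-arm values are hypothetical:

```python
import math

def static_lumbar_torque(trunk_angle_deg, upper_body_mass=40.0, com_distance=0.25):
    """Crude static estimate of lumbar extension torque (N*m) for a trunk
    leaned forward by trunk_angle_deg from vertical. The upper-body mass (kg)
    and distance (m) from the lumbar joint to the upper-body centre of mass
    are illustrative assumptions, not parameters from the study."""
    g = 9.81  # gravitational acceleration, m/s^2
    return upper_body_mass * g * com_distance * math.sin(math.radians(trunk_angle_deg))
```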
Wong, M S; Cheng, J C Y; Wong, M W; So, S F
2005-04-01
A study was conducted to compare the CAD/CAM method with the conventional manual method in the fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis. Ten subjects were recruited for this study. Efficiency analyses of the two methods were performed from the cast filling/digitization process to the completion of cast/image rectification. The dimensional changes of the casts/models rectified by the two cast rectification methods were also investigated. The results demonstrated that the CAD/CAM method was faster than the conventional manual method in the studied processes. The mean rectification time of the CAD/CAM method was shorter than that of the conventional manual method by 108.3 min (63.5%), indicating that the CAD/CAM method took about one third of the time of the conventional manual method to finish cast rectification. In the comparison of cast/image dimensional differences between the conventional manual method and the CAD/CAM method, five major dimensions in each of the five rectified regions, namely the axilla, thoracic, lumbar, abdominal, and pelvic regions, were involved. There were no statistically significant dimensional differences (at the p < 0.05 level) in 19 out of the 25 studied dimensions. This study demonstrated that the CAD/CAM system could save time in the rectification process and offer a relatively high resemblance in cast rectification as compared with the conventional manual method.
Inter-Slice Blood Flow and Magnetization Transfer Effects as A New Simultaneous Imaging Strategy.
Han, Paul Kyu; Barker, Jeffrey W; Kim, Ki Hwan; Choi, Seung Hong; Bae, Kyongtae Ty; Park, Sung-Hong
2015-01-01
The recent blood flow and magnetization transfer (MT) technique termed alternate ascending/descending directional navigation (ALADDIN) achieves contrast using interslice blood flow and MT effects with no separate preparation RF pulse, thereby potentially overcoming limitations of conventional methods. In this study, we examined the signal characteristics of ALADDIN as a simultaneous blood flow and MT imaging strategy by comparing it with pseudo-continuous ASL (pCASL) and conventional MT asymmetry (MTA) methods, all of which had the same bSSFP readout. Bloch-equation simulations and experiments showed that ALADDIN perfusion signals increased with flip angle, whereas MTA signals peaked at flip angles around 45°-60°. ALADDIN provided signals comparable to those of pCASL and conventional MTA methods emulating the first, second, and third prior slices of ALADDIN under the same scan conditions, suggesting ALADDIN signals to be a superposition of signals from multiple labeling planes. The quantitative cerebral blood flow signals from a modified continuous ASL model overestimated the perfusion signals compared to those measured with a pulsed ASL method. Simultaneous mapping of blood flow, MTA, and MT ratio in the whole brain is feasible with ALADDIN within a clinically reasonable time, which can potentially aid the diagnosis of various diseases.
Implementation of a novel efficient low cost method in structural health monitoring
NASA Astrophysics Data System (ADS)
Asadi, S.; Sepehry, N.; Shamshirsaz, M.; Vaghasloo, Y. A.
2017-05-01
In active structural health monitoring (SHM) methods, it is necessary to excite the structure with a preselected signal. Many studies in active SHM focus on higher frequency ranges, since higher excitation frequencies make it possible to detect smaller damage. Also, to increase the spatial range of measurements and enhance the signal-to-noise ratio (SNR), the excitation signal is usually amplified. These issues become substantial where piezoelectric transducers with relatively high capacitance are used, and the need for high-power amplifiers consequently becomes predominant. In this paper, a novel method named the Step Excitation Method (SEM) is proposed and implemented for Lamb wave and transfer impedance-based SHM for damage detection in structures. Three different types of structure are studied: beam, plate, and pipe. The related hardware is designed and fabricated; it eliminates high-power analog amplifiers and significantly decreases the complexity of the driver. The Spectral Finite Element Method (SFEM) is applied to examine the performance of the proposed SEM. In the proposed method, once the impulse response of the system is determined, the response to any input can be obtained, in both finite element simulations and experiments, without the need for multiple measurements. The experimental results using SEM are compared with those obtained by the conventional direct excitation method for healthy and damaged structures. The results show an improvement of amplitude resolution in damage detection compared with the conventional method, owing to an SNR improvement of up to 50%.
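The key computational idea described above — once the impulse response is known, the response to any excitation follows without new measurements — is linear convolution for a linear time-invariant system. A minimal discrete-time sketch (the signals are illustrative, not SEM data):

```python
def response_from_impulse(impulse_response, input_signal):
    """Linear convolution: y[n] = sum_k h[k] * x[n - k].
    Given the measured impulse response h, returns the system's
    response y to an arbitrary input x."""
    h, x = impulse_response, input_signal
    y = [0.0] * (len(h) + len(x) - 1)
    for k, hk in enumerate(h):
        for m, xm in enumerate(x):
            y[k + m] += hk * xm
    return y
```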
2013-01-01
Background: Lack of high-fidelity simultaneous measurements of pressure and flow velocity in the aorta has impeded the direct validation of the water-hammer formula for estimating regional aortic pulse wave velocity (AO-PWV1) and has restricted the study of the change of beat-to-beat AO-PWV1 under varying physiological conditions in man. Methods: Aortic pulse wave velocity was derived using two methods in 15 normotensive subjects: 1) the conventional two-point (foot-to-foot) method (AO-PWV2) and 2) a one-point method (AO-PWV1) in which the pressure velocity-loop (PV-loop) was analyzed based on the water hammer formula using simultaneous measurements of flow velocity (Vm) and pressure (Pm) at the same site in the proximal aorta using a multisensor catheter. AO-PWV1 was calculated from the slope of the linear regression line between Pm and Vm where wave reflection (Pb) was at a minimum in early systole in the PV-loop using the water hammer formula, PWV1 = (Pm/Vm)/ρ, where ρ is the blood density. AO-PWV2 was calculated using the conventional two-point measurement method as the distance/traveling time of the wave between 2 sites for measuring P in the proximal aorta. Beat-to-beat alterations of AO-PWV1 in relationship to aortic pressure and linearity of the initial part of the PV-loop during a Valsalva maneuver were also assessed in one subject. Results: The initial part of the loop became steeper in association with the beat-to-beat increase in diastolic pressure in phase 4 during the Valsalva maneuver. The linearity of the initial part of the PV-loop was maintained consistently during the maneuver. Flow velocity vs. pressure in the proximal aorta was highly linear during early systole, with Pearson’s coefficients ranging from 0.9954 to 0.9998. The average values of AO-PWV1 and AO-PWV2 were 6.3 ± 1.2 and 6.7 ± 1.3 m/s, respectively. The regression line of AO-PWV1 on AO-PWV2 was y = 0.95x + 0.68 (r = 0.93, p <0.001). 
Conclusion: This study concluded that the water-hammer formula (one-point method) provides a reliable and convenient estimate of beat-to-beat regional aortic pulse wave velocity, consistently and regardless of changes in physiological state, in humans in the clinical setting. (*English Translation of J Jpn Coll Angiol 2011; 51: 215-221) PMID:23825494
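The one-point calculation described above amounts to a least-squares slope of early-systolic pressure on flow velocity, divided by blood density. A sketch with synthetic data (ρ = 1050 kg/m³ and all sample values are assumed for illustration, not measurements from the study):

```python
def pwv_water_hammer(pressures, velocities, rho=1050.0):
    """One-point PWV estimate: least-squares slope of the early-systolic
    pressure-velocity relation divided by blood density rho (kg/m^3).
    Pressures in Pa, velocities in m/s."""
    n = len(pressures)
    mean_p = sum(pressures) / n
    mean_v = sum(velocities) / n
    slope = (sum((p - mean_p) * (v - mean_v) for p, v in zip(pressures, velocities))
             / sum((v - mean_v) ** 2 for v in velocities))
    return slope / rho

# Synthetic early-systolic data consistent with PWV = 6.5 m/s:
true_pwv, rho = 6.5, 1050.0
v = [0.0, 0.2, 0.4, 0.6, 0.8]
p = [10000.0 + rho * true_pwv * vi for vi in v]
```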
Measuring Cyclic Error in Laser Heterodyne Interferometers
NASA Technical Reports Server (NTRS)
Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter
2010-01-01
An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer (see figure), which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-
Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.
Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il
2017-09-13
This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurements that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
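For a single uniform pipe with sensors at both ends, the leak-location calculation reduces to a simple expression in the estimated time difference; the paper's formula generalises this to connected pipes of different types. A sketch of the uniform-pipe case (the wave speed and geometry below are assumed values):

```python
def leak_position(pipe_length_m, wave_speed_m_s, time_difference_s):
    """Distance of the leak from sensor 1 on a uniform pipe of length L,
    where time_difference_s = (arrival at sensor 1) - (arrival at sensor 2).
    From d1 + d2 = L and d1 - d2 = c * tau: d1 = (L + c * tau) / 2."""
    return (pipe_length_m + wave_speed_m_s * time_difference_s) / 2.0

# Assumed scenario: 100 m pipe, 1200 m/s wave speed, leak 30 m from sensor 1
tau = (30.0 - 70.0) / 1200.0   # arrival-time difference implied by that geometry
```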
A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components
NASA Astrophysics Data System (ADS)
Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa
2016-10-01
Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensors control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low-efficiency problem of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that it is more efficient than conventional searching methods at achieving coarse frequency estimation (locating the peak of the FFT amplitude spectrum) by applying a modified zero-crossing technique. Thus, the proposed estimation algorithm requires fewer hardware and software resources and can achieve even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
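The coarse step can be illustrated with the basic (unmodified) zero-crossing idea: count sign changes and divide by twice the record duration, since a sinusoid crosses zero twice per cycle. A sketch, not the paper's modified technique (the test signal is illustrative):

```python
import math

def zero_crossing_frequency(samples, sample_rate):
    """Coarse frequency estimate of a sinusoid: count sign changes and
    divide by twice the record duration (two crossings per cycle)."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    duration = len(samples) / sample_rate
    return crossings / (2.0 * duration)

# Illustrative 50 Hz sinusoid sampled at 10 kHz for 1 s
fs = 10_000.0
signal = [math.sin(2 * math.pi * 50.0 * n / fs + 0.1) for n in range(10_000)]
```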
Denoising time-domain induced polarisation data using wavelet techniques
NASA Astrophysics Data System (ADS)
Deo, Ravin N.; Cull, James P.
2016-05-01
Time-domain induced polarisation (TDIP) methods are routinely used for near-surface evaluations in quasi-urban environments harbouring networks of buried civil infrastructure. A conventional technique for improving the signal-to-noise ratio in such environments is analogue or digital low-pass filtering followed by stacking and rectification. However, this induces large distortions in the processed data. In this study, we have conducted the first application of wavelet-based denoising techniques for processing raw TDIP data. Our investigation included laboratory and field measurements to better understand the advantages and limitations of this technique. It was found that distortions arising from conventional filtering can be significantly avoided with the use of wavelet-based denoising techniques. With recent advances in full-waveform acquisition and analysis, incorporation of wavelet denoising techniques can further enhance surveying capabilities. In this work, we present the rationale for utilising wavelet denoising methods and discuss some important implications, which can positively influence TDIP methods.
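The core of wavelet denoising is to transform the signal, shrink the small (noise-dominated) detail coefficients, and invert. A minimal one-level Haar sketch of that idea, not the authors' full processing scheme (threshold and signals are illustrative):

```python
import math

def haar_denoise(signal, threshold):
    """One-level Haar wavelet denoising: decompose into approximation and
    detail coefficients, soft-threshold the details, then reconstruct.
    Signal length must be even."""
    s = 1 / math.sqrt(2)
    half = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) * s for i in range(half)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) * s for i in range(half)]
    # Soft thresholding shrinks coefficients toward zero by `threshold`
    detail = [math.copysign(max(abs(d) - threshold, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])  # inverse Haar transform
    return out
```

Real TDIP processing would use deeper decompositions and smoother wavelets, but the threshold-and-invert structure is the same.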
Tuning fork enhanced interferometric photoacoustic spectroscopy: a new method for trace gas analysis
NASA Astrophysics Data System (ADS)
Köhring, M.; Pohlkötter, A.; Willer, U.; Angelmahr, M.; Schade, W.
2011-01-01
A photoacoustic trace gas sensor based on an optical read-out method of a quartz tuning fork is shown. Instead of conventional piezoelectric signal read-out, as applied in well-known quartz-enhanced photoacoustic spectroscopy (QEPAS), an interferometric read-out method for measurement of the tuning fork's oscillation is presented. To demonstrate the potential of the optical read-out of tuning forks in photoacoustics, a comparison between the performances of a sensor with interferometric read-out and conventional QEPAS with piezoelectric read-out is reported. The two sensors show similar characteristics. The detection limit for the optical read-out is determined to be L_opt = (2598 ± 84) ppm (1σ), compared to L_elec = (2579 ± 78) ppm (1σ) for piezoelectric read-out. In both cases the detection limit is defined by the thermal noise of the tuning fork.
Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter
Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il
2017-01-01
This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurements that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected. PMID:28902154
Machado, Pedro; Navarro-Compán, Victoria; Landewé, Robert; van Gaalen, Floris A; Roux, Christian; van der Heijde, Désirée
2015-02-01
The Ankylosing Spondylitis Disease Activity Score (ASDAS) is a composite measure of disease activity in axial spondyloarthritis. The aims of this study were to determine the most appropriate method for calculating the ASDAS using the C-reactive protein (CRP) level when the conventional CRP level was below the limit of detection, to determine how low CRP values obtained by high-sensitivity CRP (hsCRP) measurement influence ASDAS-CRP results, and to test agreement between different ASDAS formulae. Patients with axial spondyloarthritis who had a conventional CRP level below the limit of detection (5 mg/liter) were selected (n = 257). The ASDAS–conventional CRP with 11 different imputations for the conventional CRP value (range 0–5 mg/liter, at 0.5-mg/liter intervals) was calculated. The ASDAS-hsCRP and ASDAS using the erythrocyte sedimentation rate (ESR) were also calculated. Agreement between the ASDAS formulae was tested. The ASDAS-hsCRP showed better agreement with the ASDAS-CRP calculated using the conventional CRP imputation values of 1.5 and 2.0 mg/liter and with the ASDAS-ESR than with other imputed formulae. Disagreement occurred mainly in lower disease activity states (inactive/moderate disease activity). When the CRP value was <2 mg/liter, the resulting ASDAS-CRP scores may have been inappropriately low. When the conventional CRP level is below the limit of detection or when the hsCRP level is <2 mg/liter, the constant value of 2 mg/liter should be used to calculate the ASDAS-CRP score. There is good agreement between the ASDAS-hsCRP and ASDAS-ESR; however, formulae are not interchangeable.
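The study's recommendation reduces to a simple substitution rule before the score is computed. The sketch below uses the published ASDAS-CRP weights, but treat the exact formula as an assumption here; the point being illustrated is the 2 mg/liter floor, and the input names are invented for the demo.

```python
"""ASDAS-CRP with the abstract's imputation rule (illustrative sketch)."""
import math

def asdas_crp(back_pain, morning_stiffness, patient_global,
              peripheral_pain, crp_mg_per_liter):
    # Per the study: when the CRP level is below the limit of detection
    # (or hsCRP < 2 mg/liter), substitute the constant 2 mg/liter.
    crp = max(crp_mg_per_liter, 2.0)
    return (0.12 * back_pain
            + 0.06 * morning_stiffness
            + 0.11 * patient_global
            + 0.07 * peripheral_pain
            + 0.58 * math.log(crp + 1.0))

# A CRP reported as undetectable (0) scores the same as CRP = 2 mg/liter
print(asdas_crp(4, 3, 5, 2, 0.0) == asdas_crp(4, 3, 5, 2, 2.0))  # True
```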
A novel spinal kinematic analysis using X-ray imaging and vicon motion analysis: a case study.
Noh, Dong K; Lee, Nam G; You, Joshua H
2014-01-01
This study highlights a novel spinal kinematic analysis method and the feasibility of X-ray imaging measurements to accurately assess thoracic spine motion. The advanced X-ray Nash-Moe method and analysis were used to compute the segmental range of motion in thoracic vertebra pedicles in vivo. This Nash-Moe X-ray imaging method was compared with a standardized method using the Vicon 3-dimensional motion capture system. Linear regression analysis showed an excellent and significant correlation between the two methods (R2 = 0.99, p < 0.05), suggesting that the analysis of spinal segmental range of motion using X-ray imaging measurements was accurate and comparable to the conventional 3-dimensional motion analysis system. Clinically, this novel finding is compelling evidence demonstrating that measurements with X-ray imaging are useful to accurately decipher pathological spinal alignment and movement impairments in idiopathic scoliosis (IS).
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
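The matrix measure ingredient mentioned in the abstract can be shown concretely. This sketch does not reproduce the paper's bounded linear stability analysis or its delay margin formula; it only computes the matrix measure (logarithmic 2-norm) mu_2(A), which upper-bounds transient growth via ||e^{At}||_2 <= e^{mu_2(A) t} and always dominates the spectral abscissa. The matrix A is invented for the demo.

```python
"""Matrix measure (logarithmic 2-norm) of a system matrix (illustrative sketch)."""
import numpy as np

def matrix_measure_2(A):
    """mu_2(A) = largest eigenvalue of the symmetric part (A + A^T)/2."""
    return float(np.linalg.eigvalsh((A + A.T) / 2.0).max())

A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])
mu = matrix_measure_2(A)
alpha = float(np.linalg.eigvals(A).real.max())  # spectral abscissa
print(mu, alpha)  # mu >= alpha always; mu < 0 implies contraction
```

A negative matrix measure gives an exponential decay estimate that is simple to evaluate, which is why this quantity yields a cheap analytical bound in delay-margin estimation.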
The impact of changing dental needs on cost savings from fluoridation.
Campain, A C; Mariño, R J; Wright, F A C; Harrison, D; Bailey, D L; Morgan, M V
2010-03-01
Although community water fluoridation has been one of the cornerstone strategies for the prevention and control of dental caries, questions are still raised regarding its cost-effectiveness. This study assessed the impact of changing dental needs on the cost savings from community water fluoridation in Australia. Net costs were estimated as Costs(programme) minus Costs(averted caries). Averted costs were estimated as the product of the caries increment in a non-fluoridated community, the effectiveness of fluoridation, and the cost of a carious surface. Modelling considered four age cohorts (6-20, 21-45, 46-65, and 66+ years) and three time points (1970s, 1980s, and 1990s). The cost of a carious surface was estimated by conventional and complex methods. Real discount rates of 4, 7 (base), and 10% were utilized. With base-case assumptions, the average annual cost savings per person, in 2005 Australian dollars, ranged from $56.41 (1970s) to $17.75 (1990s) by the conventional method and from $249.45 (1970s) to $69.86 (1990s) by the complex method. Under worst-case assumptions fluoridation remained cost-effective, with cost savings ranging from $24.15 (1970s) to $3.87 (1990s) (conventional method) and from $107.85 (1970s) to $24.53 (1990s) (complex method). For the 66+ years cohort (1990s) fluoridation did not show a cost saving, but the costs per person were marginal. Community water fluoridation remains a cost-effective preventive measure in Australia.
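The costing arithmetic in the abstract reduces to a one-line formula, sketched below. All the numeric inputs are made up for the demo; the study's actual programme costs, caries increments, and effectiveness values are not given in the abstract.

```python
"""Net-cost arithmetic for a fluoridation programme (illustrative sketch)."""

def annual_net_cost_per_person(programme_cost, caries_increment,
                               effectiveness, cost_per_surface):
    """Net cost = programme cost minus averted treatment cost, per person-year.

    Averted cost is the product of the caries increment (surfaces/person/year),
    fluoridation effectiveness (fraction of increment prevented), and the cost
    of restoring one carious surface.
    """
    averted = caries_increment * effectiveness * cost_per_surface
    return programme_cost - averted

# Hypothetical inputs: $2/person programme cost, 0.9 surfaces/year increment,
# 30% effectiveness, $120 per restored surface
net = annual_net_cost_per_person(2.0, 0.9, 0.30, 120.0)
print(net)  # -30.4: a negative net cost is a saving per person per year
```

Discounting at the study's real rates would scale future averted costs down, which is why the per-decade savings shrink as caries increments fall.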
NASA Technical Reports Server (NTRS)
Olson, William S.
1990-01-01
A physical retrieval method for estimating precipitating water distributions and other geophysical parameters based upon measurements from the DMSP-F8 SSM/I is developed. Three unique features of the retrieval method are (1) sensor antenna patterns are explicitly included to accommodate varying channel resolution; (2) precipitation-brightness temperature relationships are quantified using the cloud ensemble/radiative parameterization; and (3) spatial constraints are imposed for certain background parameters, such as humidity, which vary more slowly in the horizontal than the cloud and precipitation water contents. The general framework of the method will facilitate the incorporation of measurements from the SSM/T, SSM/T-2, and geostationary infrared measurements, as well as information from conventional sources (e.g., radiosondes) or numerical forecast model fields.
An accuracy measurement method for star trackers based on direct astronomic observation
Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping
2016-01-01
The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will ultimately determine satellite performance. A new and robust accuracy measurement method for star trackers based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted to account for the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed that can directly determine the pointing and rolling accuracy of a star tracker. Experimental measurements confirm that this method is effective and convenient to implement. The measurement environment is close to in-orbit conditions and can satisfy the stringent requirements of high-accuracy star trackers. PMID:26948412
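The basic scoring step behind such an accuracy measurement, comparing a measured star direction against a catalogue direction, can be sketched as follows. The paper's coordinate transformations and three-axis criterion are not reproduced here, and the vectors are invented for the demo.

```python
"""Angular error between a measured and a reference star vector (sketch)."""
import numpy as np

def angular_error_arcsec(v_measured, v_reference):
    """Angle between two direction vectors, in arcseconds."""
    u = v_measured / np.linalg.norm(v_measured)
    w = v_reference / np.linalg.norm(v_reference)
    # The arctan2 form is numerically stable for very small angles, where
    # arccos of a dot product near 1 loses precision
    angle_rad = np.arctan2(np.linalg.norm(np.cross(u, w)), np.dot(u, w))
    return np.degrees(angle_rad) * 3600.0

# Demo: measured direction tilted 1 arcsecond away from the reference
ref = np.array([0.0, 0.0, 1.0])
tilt = np.radians(1.0 / 3600.0)
meas = np.array([np.sin(tilt), 0.0, np.cos(tilt)])
print(angular_error_arcsec(meas, ref))  # 1.0 (arcseconds)
```

Accumulating this error over many observations along each axis is what produces the error curves and per-axis accuracy figures the paper describes.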
Varma, Niraj; Michalski, Justin; Stambler, Bruce; Pavri, Behzad B.
2014-01-01
Aims To test recommended implantable cardioverter defibrillator (ICD) follow-up methods by 'in-person evaluations' (IPE) vs. 'remote Home Monitoring' (HM). Methods and results ICD patients were randomized 2:1 to automatic HM or to Conventional monitoring, with follow-up checks scheduled at 3, 6, 9, 12, and 15 months post-implant. Conventional patients were evaluated with IPE only. Home Monitoring patients were assessed remotely only for 1 year between the 3 and 15 month evaluations. Adherence to follow-up was measured. HM and Conventional patients were similar (age 63 years, 72% male, left ventricular ejection fraction 29%, primary prevention 73%, DDD 57%). Conventional management suffered greater patient attrition during the trial (20.1 vs. 14.2% in HM, P = 0.007). Three-month follow-up occurred in 84% in both groups. There was 100% adherence (5 of 5 checks) in 47.3% of Conventional vs. 59.7% of HM patients (P < 0.001). Between 3 and 15 months, HM exhibited superior (2.2×) adherence to scheduled follow-up (incidence of failed follow-up: 146 of 2421, 6.0%, in HM vs. 145 of 1098, 13.2%, in Conventional; P < 0.001) and punctuality. In HM (daily transmission success rate median 91%), transmission loss caused only 22 of 2275 (0.97%) failed HM evaluations between 3 and 15 months; the others resulted from clinic oversight. The overall IPE failure rate in Conventional (193 of 1841, 10.5%) exceeded that in HM (97 of 1484, 6.5%; P < 0.001) by 62%, i.e. HM patients remained more loyal to IPE when this was mandated. Conclusion Automatic remote monitoring better preserves patient retention and adherence to scheduled follow-up compared with IPE. Clinical trial registration NCT00336284. PMID:24595864
An Investigation of a Photographic Technique of Measuring High Surface Temperatures
NASA Technical Reports Server (NTRS)
Siviter, James H., Jr.; Strass, H. Kurt
1960-01-01
A photographic method of temperature determination has been developed to measure elevated temperatures of surfaces. The technique presented herein minimizes calibration procedures and permits wide variation in emulsion developing techniques. The present work indicates that the lower limit of applicability is approximately 1,400 F when conventional cameras, emulsions, and moderate exposures are used. The upper limit is determined by the calibration technique and the accuracy required.