Defining Platelet Function During Polytrauma
2013-02-01
calibrated automated thrombography (CAT). 3. Platelet-induced clot contraction and formation, using viscoelastic measures such as TEG with Platelet Mapping™, Hemodyne's platelet contractile force measurement, and thromboelastography. 4. Flow... The degree to which certain injury patterns as well as
Parsons, Martin Em; O'Connell, Karen; Allen, Seamus; Egan, Karl; Szklanna, Paulina B; McGuigan, Christopher; Ní Áinle, Fionnuala; Maguire, Patricia B
2017-01-01
Thrombin is well recognised for its role in the coagulation cascade, but it also plays a role in inflammation, with enhanced thrombin generation observed in several inflammatory disorders. Although patients with multiple sclerosis (MS) have a higher incidence of thrombotic disease, thrombin generation has not been studied in MS to date. The aim of this study was to characterise calibrated automated thrombography parameters in patients with relapsing-remitting MS (RRMS) and primary progressive MS (PPMS) in comparison to healthy controls (HCs). Calibrated automated thrombography was performed on platelet-poor plasma from 15 patients with RRMS, 15 with PPMS and 19 HCs. We found that patients with RRMS generate thrombin at a significantly faster rate than patients with the less inflammatory subtype, PPMS, or HCs. In addition, the speed of thrombin generation was significantly correlated with time from clinical diagnosis in both subtypes; however, in RRMS the rate of thrombin generation increased with increasing time from clinical diagnosis, while in PPMS it decreased. These data likely reflect the differentially active proinflammatory states in each MS subtype and provide novel mechanistic insights into the clinically relevant prothrombotic state observed in these patients.
Coagulation phenotypes in septic shock as evaluated by calibrated automated thrombography.
Perrin, Julien; Charron, Cyril; François, Jean-Hugues; Cramer-Bordé, Elisabeth; Lévy, Bruno; Borgel, Delphine; Vieillard-Baron, Antoine
2015-01-01
Sepsis induces alterations of coagulation with both hypercoagulable and hypocoagulable features. The net result of their combination remains unknown, making it difficult to predict whether one prevails over the other. Thrombin generation tests (TGTs) stand as an interesting tool to establish an integrative phenotype of coagulation. It has been reported that septic patients display a hypocoagulable trait on TGT; however, the protein C (PC) system response was not evaluated. We aimed to describe the thrombin generation profile of patients with septic shock under conditions sensitive to the PC system, to evaluate the net result of coagulation abnormalities, and to determine whether hypercoagulable and hypocoagulable traits coexist within a given individual. Thrombin generation was studied in plasma from patients presenting with septic shock at diagnosis and 6 h after conventional therapeutic management, using calibrated automated thrombography with or without thrombomodulin (TM) addition. Patients exhibited clear alterations of the TGT, presenting both as consumption-related hypocoagulability (evidenced without TM addition) and as hypercoagulability through decreased sensitivity to the PC system (evidenced with TM addition). No difference could be demonstrated between survivors and nonsurvivors at Day 28, but patients who did not respond to therapeutics at 6 h appeared more hypercoagulable. More importantly, although our results evidence heterogeneity between patients, we show that the alterations of coagulation reach an equilibrium in the majority of patients, suggesting "normocoagulability"; in the presence of a biological imbalance between baseline thrombin generation and sensitivity to TM, however, the global effect mostly tends toward hypercoagulability. Thus, TGT may help identify distinct biological coagulation phenotypes within the complex alterations induced by sepsis.
Kristensen, Anne F; Kristensen, Søren R; Falkmer, Ursula; Münster, Anna-Marie B; Pedersen, Shona
2018-05-01
The Calibrated Automated Thrombography (CAT) is an in vitro thrombin generation (TG) assay that holds promise as a valuable tool within clinical diagnostics. However, the technique has a considerable analytical variation, and we therefore investigated the analytical and between-subject variation of CAT systematically. Moreover, we assessed the application of an internal standard for normalization to diminish variation. Twenty healthy volunteers donated one blood sample, which was subsequently centrifuged, aliquoted and stored at -80 °C prior to analysis. The analytical variation was determined on eight runs, in which plasma from the same seven volunteers was processed in triplicate, and for the between-subject variation, TG analysis was performed on plasma from all 20 volunteers. The trigger reagents used for the TG assays included both PPP reagent containing 5 pM tissue factor (TF) and PPPlow with 1 pM TF. Plasma drawn from a single donor was applied to all plates as an internal standard for each TG analysis, which was subsequently used for normalization. The total analytical variation for TG analysis is 3-14% with PPPlow reagent and 9-13% with PPP reagent. This variation can be reduced only slightly by using an internal standard, and mainly for the endogenous thrombin potential (ETP). The between-subject variation is higher when using PPPlow than PPP, and it is considerably higher than the analytical variation. TG thus has a rather high inherent analytical variation, but this is considerably lower than the between-subject variation when PPPlow is used as reagent.
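The internal-standard normalization described above reduces to a simple rescaling. The sketch below is a minimal illustration, not the study's code: each plate's sample values are scaled by the ratio of the internal-standard plasma's long-run reference value to its value on that plate; all names and numbers are invented.

```python
import numpy as np  # used for array-wise normalization of a whole plate

def normalize_tg(values, plate_standard, standard_reference):
    """Rescale TG parameter values (e.g., ETP in nM*min) from one plate by the
    internal standard's deviation on that plate."""
    return np.asarray(values) * (standard_reference / plate_standard)

# Example: the internal-standard plasma read 5% high on this plate, so all
# sample ETPs are scaled down accordingly.
plate_standard_etp = 1575.0   # ETP of the internal standard on this plate
reference_etp = 1500.0        # long-run mean ETP of the same donor plasma
print(normalize_tg([1260.0, 1680.0], plate_standard_etp, reference_etp))
# -> [1200. 1600.]
```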
Cuq, Benoît; Blois, Shauna L; Wood, R Darren; Monteith, Gabrielle; Abrams-Ogg, Anthony C; Bédard, Christian; Wood, Geoffrey A
2018-06-01
Thrombin plays a central role in hemostasis and thrombosis. Calibrated automated thrombography (CAT), a thrombin generation assay, may be a useful test for hemostatic disorders in dogs. To describe CAT results in a group of healthy dogs, and assess preanalytical variables and biological variability. Forty healthy dogs were enrolled. Lag time (Lag), time to peak (ttpeak), peak thrombin generation (peak), and endogenous thrombin potential (ETP) were measured. Direct jugular venipuncture and winged-needle catheter-assisted saphenous venipuncture were used to collect samples from each dog, and results were compared between methods. Sample stability at -80°C was assessed over 12 months in a subset of samples. Biological variability of CAT was assessed via nested ANOVA using samples obtained weekly from a subset of 9 dogs for 4 consecutive weeks. Samples for CAT were stable at -80°C over 12 months of storage. Samples collected via winged-needle catheter venipuncture showed poor repeatability compared to direct venipuncture samples; there was also poor agreement between the 2 sampling methods. Intra-individual variability of CAT parameters was below 25%; inter-individual variability ranged from 36.9% to 78.5%. Measurement of thrombin generation using CAT appears to be repeatable in healthy dogs, and samples are stable for at least 12 months when stored at -80°C. Direct venipuncture sampling is recommended for CAT. Low indices of individuality suggest that subject-based reference intervals are more suitable when interpreting CAT results. © 2018 American Society for Veterinary Clinical Pathology.
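The biological-variability figures above (intra-individual CV below 25%, inter-individual CV of 36.9% to 78.5%, and low indices of individuality) follow from a standard variance decomposition of repeated measurements. Below is a minimal sketch of that arithmetic, assuming a simple balanced design and ignoring the analytical component; the data are simulated, not the study's.

```python
import numpy as np

def variability_indices(data):
    """data: 2-D array, rows = subjects, columns = repeated weekly measurements.
    Returns intra-individual CV, inter-individual CV, and their ratio (a
    simplified index of individuality)."""
    grand_mean = data.mean()
    within_var = data.var(axis=1, ddof=1).mean()        # intra-individual
    # Between-subject variance: variance of subject means minus the share
    # contributed by within-subject noise (clipped at zero).
    between_var = max(data.mean(axis=1).var(ddof=1) - within_var / data.shape[1], 0.0)
    cv_i = 100 * np.sqrt(within_var) / grand_mean
    cv_g = 100 * np.sqrt(between_var) / grand_mean
    return cv_i, cv_g, cv_i / cv_g

# Simulated example: 9 subjects, 4 weekly samples each.
rng = np.random.default_rng(0)
demo = rng.normal(1500, 100, size=(9, 4)) + rng.normal(0, 400, size=(9, 1))
print(variability_indices(demo))  # a ratio well below 1 favours subject-based intervals
```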
Engels, A C; Hoylaerts, M F; Endo, M; Loyen, S; Verbist, G; Manodoro, S; DeKoninck, P; Richter, J; Deprest, J A
2013-02-01
We aimed to demonstrate local thrombin generation by fetal membranes, as well as its ability to generate fibrin from fibrinogen concentrate. Furthermore, we aimed to investigate the efficacy of collagen plugs, soaked with plasma and fibrinogen, to seal iatrogenic fetal membrane defects. Thrombin generation by homogenized fetal membranes was measured by calibrated automated thrombography. To identify the coagulation caused by an iatrogenic membrane defect, we analyzed fibrin formation by optical densitometry at various concentrations of fibrinogen. A collagen plug soaked with fibrinogen and plasma was tested in an ex vivo model for its ability to seal an iatrogenic fetal membrane defect. Fetal membrane homogenates potently induced thrombin generation in amniotic fluid and diluted plasma. Upon the addition of fibrinogen concentrate, potent fibrin formation was triggered. Measured by densitometry, fibrin formation was optimal at 1250 µg/mL fibrinogen in combination with 4% plasma. A collagen plug soaked with fibrinogen and plasma sealed an iatrogenic membrane defect about 35% better than collagen plugs without these additives (P = 0.037). These in vitro experiments suggest that the addition of fibrinogen and plasma may enhance the sealing efficacy of collagen plugs in closing iatrogenic fetal membrane defects. © 2013 John Wiley & Sons, Ltd.
Bagot, C N; Leishman, E; Onyiaodike, C C; Jordan, F; Freeman, D J
2017-09-01
Pregnancy is a hypercoagulable state associated with an increased risk of venous thrombosis, which begins during the first trimester, but the exact time of onset is unknown. Thrombin generation, a laboratory marker of thrombosis risk, increases during normal pregnancy, but it is unclear exactly how early this increase occurs. We assessed thrombin generation by Calibrated Automated Thrombography in women undergoing natural cycle in vitro fertilization, who subsequently gave birth at term following a normal pregnancy (n=22). Blood samples were taken just prior to conception and repeated five times during very early pregnancy, up to Day 59 estimated gestation. Mean Endogenous Thrombin Potential (ETP), peak thrombin generation and Velocity Index (VI) increased significantly from pre-pregnancy to Day 43 gestation (p = 0.024 to 0.0004). This change persisted to Day 59 gestation. The mean percentage change from baseline, accounting for inter-individual variation, in ETP, peak thrombin and VI increased significantly from pre-pregnancy to Day 32 gestation (p = 0.0351 to <0.0001), with the mean increase from baseline persisting to Day 59 gestation. Thrombin generation increases significantly during the very early stages of normal pregnancy when compared to the pre-pregnancy state. The increased risk of venous thrombosis therefore likely begins very early in a woman's pregnancy, suggesting that women considered clinically to be at high thrombotic risk should start thromboprophylaxis as early as possible after a positive pregnancy test. Copyright © 2017 Elsevier Ltd. All rights reserved.
Andersson, Helena M.; Arantes, Márcia J.; Crawley, James T. B.; Luken, Brenda M.; Tran, Sinh; Dahlbäck, Björn; Rezende, Suely M.
2010-01-01
Protein S has an established role in the protein C anticoagulant pathway, where it enhances the factor Va (FVa) and factor VIIIa (FVIIIa) inactivating property of activated protein C (APC). Despite its physiological role and clinical importance, the molecular basis of its action is not fully understood. To clarify the mechanism of the protein S interaction with APC, we have constructed and expressed a library of composite or point variants of human protein S, with residue substitutions introduced into the Gla, thrombin-sensitive region (TSR), epidermal growth factor 1 (EGF1), and EGF2 domains. Cofactor activity for APC was evaluated by calibrated automated thrombography (CAT) using protein S-deficient plasma. Of 27 variants tested initially, only one, protein S D95A (within the EGF1 domain), was largely devoid of functional APC cofactor activity. Protein S D95A was, however, γ-carboxylated and bound phospholipids with an apparent dissociation constant (Kd(app)) similar to that of wild-type (WT) protein S. In a purified assay using FVa R506Q/R679Q, purified protein S D95A was shown to have a greatly reduced ability to enhance APC-induced cleavage of FVa Arg306. It is concluded that residue Asp95 within EGF1 is critical for the APC cofactor function of protein S and could define a principal functional interaction site for APC. PMID:20308596
Cardenas, Jessica C.; Owens, A. Phillip; Krishnamurthy, Janakiraman; Sharpless, Norman E.; Whinna, Herbert C.; Church, Frank C.
2011-01-01
Objective Age-associated cellular senescence is thought to promote vascular dysfunction. p16INK4a is a cell cycle inhibitor that promotes senescence and is upregulated during normal aging. In this study, we examine the contribution of p16INK4a overexpression to venous thrombosis. Methods and Results Mice overexpressing p16INK4a were studied with four different vascular injury models: (1) ferric chloride (FeCl3) and (2) Rose Bengal to induce saphenous vein thrombus formation; (3) FeCl3 and vascular ligation to examine thrombus resolution; and (4) LPS administration to initiate inflammation-induced vascular dysfunction. p16INK4a transgenic mice had accelerated occlusion times (13.1 ± 0.4 min) compared to normal controls (19.7 ± 1.1 min) in the FeCl3 model, and 12.7 ± 2.0 min versus 18.6 ± 1.9 min, respectively, in the Rose Bengal model. Moreover, overexpression of p16INK4a delayed thrombus resolution compared to normal controls. In response to LPS treatment, the p16INK4a transgenic mice showed enhanced thrombin generation in plasma-based calibrated automated thrombography (CAT) assays. Finally, bone marrow transplantation studies suggested that increased p16INK4a expression in hematopoietic cells contributes to thrombosis, demonstrating a role for p16INK4a expression in venous thrombosis. Conclusions Venous thrombosis is augmented by overexpression of the cellular senescence gene p16INK4a. PMID:21233453
Rubin, Olivier; Delobel, Julien; Prudent, Michel; Lion, Niels; Kohl, Kid; Tucker, Erik I; Tissot, Jean-Daniel; Angelillo-Scherrer, Anne
2013-08-01
Red blood cell-derived microparticles (RMPs) are small phospholipid vesicles shed from RBCs in blood units, where they accumulate during storage. Because microparticles are bioactive, RMPs could be mediators of posttransfusion complications or, on the contrary, constitute a potential hemostatic agent. This study was performed to establish the impact on coagulation of RMPs isolated from blood units. Using calibrated automated thrombography, we investigated whether RMPs affect thrombin generation (TG) in plasma. We found that RMPs were not only able to increase TG in plasma in the presence of a low exogenous tissue factor (TF) concentration, but also to initiate TG in plasma in the absence of exogenous TF. TG induced by RMPs in the absence of exogenous TF was affected neither by the presence of blocking anti-TF nor by the absence of Factor (F)VII. It was significantly reduced in plasma deficient in FVIII or FIX and abolished in FII-, FV-, FX-, or FXI-deficient plasma. TG was also totally abolished when the anti-FXI antibody 01A6 was added to the sample. Finally, neither Western blotting, flow cytometry, nor immunogold labeling allowed the detection of traces of TF antigen. In addition, RMPs did not comprise polyphosphate, an important modulator of coagulation. Taken together, our data show that RMPs have FXI-dependent procoagulant properties and are able to initiate and propagate TG. The anionic surface of RMPs might be the site of FXI-mediated TG amplification and intrinsic tenase and prothrombinase complex assembly. © 2012 American Association of Blood Banks.
Evaluation of procoagulant tissue factor expression in canine hemangiosarcoma cell lines.
Witter, Lauren E; Gruber, Erika J; Lean, Fabian Z X; Stokol, Tracy
2017-01-01
OBJECTIVE To evaluate expression of procoagulant tissue factor (TF) by canine hemangiosarcoma cells in vitro. SAMPLES 4 canine hemangiosarcoma cell lines (SB-HSA [mouse-passaged cutaneous tumor], Emma [primary metastatic brain tumor], and Frog and Dal-1 [primary splenic tumors]) and 1 nonneoplastic canine endothelial cell line (CnAoEC). PROCEDURES TF mRNA and TF antigen expression were evaluated by quantitative real-time PCR assay and flow cytometry, respectively. Thrombin generation was measured in canine plasma and in coagulation factor-replete or specific coagulation factor-deficient human plasma by calibrated automated thrombography. Corn trypsin inhibitor and annexin V were used to examine contributions of contact activation and membrane-bound phosphatidylserine, respectively, to thrombin generation. RESULTS All cell lines expressed TF mRNA and antigen, with significantly greater expression of both products in SB-HSA and Emma cells than in CnAoEC. A greater percentage of SB-HSA cells expressed TF antigen, compared with other hemangiosarcoma cell lines. All hemangiosarcoma cell lines generated significantly more thrombin than did CnAoEC in canine or factor-replete human plasma. Thrombin generation induced by SB-HSA cells was significantly lower in factor VII-deficient plasma than in factor-replete plasma and was abolished in factor X-deficient plasma; residual thrombin generation in factor VII-deficient plasma was abolished by incubation of cells with annexin V. Thrombin generation by SB-HSA cells was unaffected by the addition of corn trypsin inhibitor. CONCLUSIONS AND CLINICAL RELEVANCE Hemangiosarcoma cell lines expressed procoagulant TF in vitro. Further research is needed to determine whether TF can be used as a biomarker for hemostatic dysfunction in dogs with hemangiosarcoma.
Cimenti, Christina; Schlagenhauf, Axel; Leschnik, Bettina; Fröhlich-Reiterer, Elke; Jasser-Nitsche, Hildegard; Haidl, Harald; Suppan, Elisabeth; Weinhandl, Gudrun; Leberl, Maximilian; Borkenstein, Martin; Muntean, Wolfgang E
2016-12-01
Micro- and macrovascular diseases are frequent complications in patients with diabetes. Hypercoagulability may contribute to microvascular alterations. In this study, we investigated whether type 1 diabetes in children is associated with a hypercoagulable state by performing a global function test of coagulation, the thrombin generation assay. 75 patients with type 1 diabetes aged between 2 and 19 years were compared to an age-matched healthy control group. Diabetes patients were divided into high-dose and low-dose insulin cohorts with a cut-off at 0.8 U kg⁻¹ d⁻¹. Measurements were performed with platelet-poor plasma using Calibrated Automated Thrombography and 1 pM or 5 pM tissue factor. Additionally, we quantified prothrombin fragment F1+2, thrombin-antithrombin complex, prothrombin, tissue factor pathway inhibitor, and antithrombin. Patients with type 1 diabetes exhibited a significantly shorter lag time as well as a decreased thrombin peak and endogenous thrombin potential compared to control subjects with 5 pM, but not with 1 pM, tissue factor. In high-dose insulin patients, peak thrombin generation was higher and time to peak shorter than in low-dose patients. Thrombin-antithrombin complex was decreased in patients with type 1 diabetes, whereas prothrombin fragment F1+2 was comparable in both groups. Thrombin generation parameters did not correlate with parameters of metabolic control or the duration of diabetes. Taken together, we found only minor changes of thrombin generation in children and adolescents with type 1 diabetes which, in contrast to type 2 diabetes, do not argue for a hypercoagulable state. Copyright © 2016 Elsevier Ltd. All rights reserved.
Kamisato, Chikako; Furugohri, Taketoshi; Morishima, Yoshiyuki
2016-05-01
We have demonstrated that antithrombin (AT)-independent thrombin inhibitors paradoxically increase thrombin generation (TG) in human plasma in a thrombomodulin (TM)- and protein C (PC)-dependent manner. We determined the effects of AT-independent thrombin inhibitors on the negative-feedback system, activation of PC and production and degradation of factor Va (FVa), as possible mechanisms underlying the paradoxical enhancement of TG. TG in human plasma containing 10 nM TM was assayed by means of calibrated automated thrombography. As an index of PC activation, the plasma concentration of activated PC-PC inhibitor complex (aPC-PCI) was measured. The amounts of FVa heavy chain and its degradation product (FVa(307-506)) were examined by western blotting. The AT-independent thrombin inhibitors melagatran and dabigatran (both at 25-600 nM) and 3-30 μg/ml active site-blocked thrombin (IIai) increased peak levels of TG. Melagatran, dabigatran and IIai significantly decreased the plasma concentration of aPC-PCI complex at 25 nM or more, 75 nM or more, and 10 and 30 μg/ml, respectively. Melagatran (300 nM) significantly increased FVa and decreased FVa(307-506). In contrast, the direct factor Xa inhibitor edoxaban preferentially inhibited thrombin generation (≥25 nM), and higher concentrations were required to inhibit PC activation (≥150 nM) and FVa degradation (300 nM). The present study suggests that inhibition of PC activation, with subsequently decreased degradation and increased levels of FVa, may contribute to the paradoxical TG enhancement by AT-independent thrombin inhibitors, and that edoxaban may inhibit PC activation and FVa degradation as a result of TG suppression. Copyright © 2016 Elsevier Ltd. All rights reserved.
Impact of aerobic exercise on haemostatic indices in paediatric patients with haemophilia.
Kumar, Riten; Bouskill, Vanessa; Schneiderman, Jane E; Pluthero, Fred G; Kahr, Walter H A; Craik, Allison; Clark, Dewi; Whitney, Karen; Zhang, Christine; Rand, Margaret L; Carcao, Manuel
2016-06-02
This study investigated the impact of aerobic exercise on laboratory assessments of haemostatic activity in boys (5-18 years of age) with haemophilia A (HA) or B (HB), examining the hypothesis that laboratory coagulation parameters temporarily improve with exercise. Thirty subjects meeting eligibility criteria (19 HA; 11 HB; mean age: 12.8 years) were invited to participate. They underwent a replacement factor washout period and were advised against strenuous activity for three days prior to the planned intervention. At the study visit, baseline blood samples were drawn prior to exercise on a stationary cycle ergometer, aiming to attain 3 minutes (min) of cycling at 85 % of predicted maximum heart rate. Blood work was repeated 5 min (t5) and 60 min (t60) post exercise completion. Samples were assessed for platelet count (PC), factor VIII activity (FVIII:C), von Willebrand factor antigen (VWF:Ag), ristocetin cofactor activity (VWF:RCo) and platelet function analysis (PFA-100); maximum rate of thrombus generation (MRTG) in blood was measured via thromboelastography and plasma peak thrombin generation (PTG) via calibrated automated thrombography. Mean duration of exercise was 13.9 (± 2.6) min. On average, t5 samples showed significant elevation, relative to baseline, in PC, FVIII:C, VWF:Ag, VWF:RCo and PTG, while FVIII:C, VWF:Ag, VWF:RCo and MRTG were significantly elevated in t60 samples. Within the cohort, participants with severe HA showed no change in FVIII:C levels with exercise. The greatest improvement in haemostatic indices was observed in post-adolescent males with mild-moderate HA, who thus represent the group most likely to benefit from a reduction of bleeding risk in the setting of exercise.
Altman, Raul; Scazziota, Alejandra Silvia; Herrera, Maria de Lourdes; Gonzalez, Claudio
2006-01-01
Background Platelet activation is crucial in normal hemostasis. Using a clotting system free of external tissue factor, we investigated whether activated Factor VII in combination with platelet agonists increased thrombin generation (TG) in vitro. Methods and results TG was quantified by time parameters, lag time (LT) and time to peak (TTP), and by amount of TG, peak of TG (PTG) and area under the thrombin formation curve after 35 minutes (AUC→35min), in plasma from 29 healthy volunteers using the calibrated automated thrombography (CAT) technique. TG parameters were measured at basal conditions and after platelet stimulation by sodium arachidonate (AA), ADP, and collagen (Col). In addition, the effects of recombinant activated FVII (rFVIIa) alone or combined with the other platelet agonists on TG parameters were investigated. We found that LT and TTP were significantly decreased (p < 0.05) and PTG and AUC→35min were significantly increased (p < 0.05) in platelet-rich plasma activated with AA, ADP, Col, and rFVIIa compared to non-activated platelet-rich plasma from normal subjects (p = 0.01). Furthermore, platelet-rich plasma activated by the combined effects of rFVIIa plus AA, ADP or Col had significantly reduced LT and TTP and increased AUC→35min (but not PTG) when compared to platelet-rich plasma activated with agonists in the absence of rFVIIa. Conclusion Platelets activated by AA, ADP, Col or rFVIIa triggered TG. This effect was increased by combining rFVIIa with the other agonists. This intrinsic coagulation system produced a burst in TG independent of external tissue factor activity, an apparent hemostatic effect with little thrombotic capacity. We therefore suggest a modification to the cell-based model of hemostasis. PMID:16630353
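For concreteness, the CAT parameters named here (LT, TTP, PTG, AUC) are simple features of the thrombin-vs-time curve. The sketch below extracts them, using a 10%-of-peak threshold as a stand-in lag-time criterion (the actual CAT software applies its own definition); the curve is synthetic, not study data.

```python
import numpy as np

def tg_parameters(t_min, thrombin_nM):
    """Extract TG curve features: lag time, time to peak, peak, and AUC."""
    thrombin_nM = np.asarray(thrombin_nM, dtype=float)
    peak_idx = int(np.argmax(thrombin_nM))
    peak = thrombin_nM[peak_idx]                       # PTG
    ttp = t_min[peak_idx]                              # TTP
    auc = float(np.trapz(thrombin_nM, t_min))          # AUC over the record
    above = np.nonzero(thrombin_nM >= 0.1 * peak)[0]   # stand-in lag criterion
    lag = t_min[above[0]]                              # LT
    return {"LT": lag, "TTP": ttp, "PTG": peak, "AUC": auc}

# Example on a synthetic thrombin burst:
t = np.linspace(0, 35, 351)                 # minutes
curve = 300 * np.exp(-((t - 8) / 3.0) ** 2) # toy thrombin concentration (nM)
print(tg_parameters(t, curve))
```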
2010-01-01
Background High concentrations of recombinant activated factor VII (rFVIIa) can stop bleeding in hemophilic patients. However, the rFVIIa dose needed to stop haemorrhage in off-label indications is unknown. Since thrombin is the main hemostatic agent, this study investigated the effect of rFVIIa and tissue factor (TF) on thrombin generation (TG) in vitro. Methods Lag time (LT), time to peak (TTP), peak TG (PTG), and area under the curve after 35 min (AUC0→35min), measured with the calibrated automated thrombography, were used to evaluate TG. TG was assayed in platelet-rich plasma (PRP) samples from 29 healthy volunteers under basal conditions and after platelet stimulation with 5.0 μg/ml, 2.6 μg/ml, 0.5 μg/ml, 0.25 μg/ml, and 0.125 μg/ml rFVIIa alone, and in normal platelet-poor plasma (PPP) samples from 22 healthy volunteers with rFVIIa in combination with various concentrations of TF (5.0, 2.5, 1.25 and 0.5 pM). Results In PRP activated by rFVIIa, there was a statistically significant increase in TG compared to basal values. A significant TF dose-dependent shortening of LT and increase in PTG and AUC0→35min were obtained in PPP. The addition of rFVIIa increased the effect of TF in shortening the LT and increasing the AUC0→35min, with no effect on PTG; these effects were independent of rFVIIa concentration. Conclusion Low concentrations of rFVIIa were sufficient to form enough thrombin in normal PRP, or in PPP when combined with TF, suggesting that low concentrations may suffice to normalize hemostasis in off-label indications. PMID:20444280
Individual differences in the calibration of trust in automation.
Pop, Vlad L; Shrewsbury, Alex; Durso, Francis T
2015-06-01
The objective was to determine whether operators with an expectancy that automation is trustworthy are better at calibrating their trust to changes in the capabilities of automation, and if so, why. Studies suggest that individual differences in automation expectancy may be able to account for why changes in the capabilities of automation lead to a substantial change in trust for some, yet only a small change for others. In a baggage screening task, 225 participants searched for weapons in 200 X-ray images of luggage. Participants were assisted by an automated decision aid exhibiting different levels of reliability. Measures of expectancy that automation is trustworthy were used in conjunction with subjective measures of trust and perceived reliability to identify individual differences in trust calibration. Operators with high expectancy that automation is trustworthy were more sensitive to changes (both increases and decreases) in automation reliability. This difference was eliminated by manipulating the causal attribution of automation errors. Attributing the cause of automation errors to factors external to the automation fosters an understanding of tasks and situations in which automation differs in reliability and may lead to more appropriate trust. The development of interventions can lead to calibrated trust in automation. © 2014, Human Factors and Ergonomics Society.
Automated Attitude Sensor Calibration: Progress and Plans
NASA Technical Reports Server (NTRS)
Sedlak, Joseph; Hashmall, Joseph
2004-01-01
This paper describes ongoing work at NASA/Goddard Space Flight Center to improve the quality of spacecraft attitude sensor calibration and reduce costs by automating parts of the calibration process. The new calibration software can autonomously preview data quality over a given time span, select a subset of the data for processing, perform the requested calibration, and output a report. This level of automation is currently being implemented for two specific applications: inertial reference unit (IRU) calibration and sensor alignment calibration. The IRU calibration utility makes use of a sequential version of the Davenport algorithm. This utility has been successfully tested with simulated and actual flight data. The alignment calibration is still in the early testing stage. Both utilities will be incorporated into the institutional attitude ground support system.
Automatic multi-camera calibration for deployable positioning systems
NASA Astrophysics Data System (ADS)
Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan
2012-06-01
Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method, and that the automated calibration method can replace the manual calibration.
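As one concrete illustration of the pairwise step described above, the sketch below estimates the essential matrix between two intrinsically calibrated cameras with the 5-point method (inside RANSAC) and decomposes it into a relative pose. It uses OpenCV and assumes matched point correspondences and the intrinsic matrix K are already available; it is not the paper's implementation.

```python
import cv2

def relative_pose(pts1, pts2, K):
    """pts1, pts2: Nx2 float arrays of matched image points from two cameras.
    K: 3x3 intrinsic matrix (both cameras assumed intrinsically calibrated)."""
    # 5-point method inside RANSAC yields the essential matrix and inlier mask.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # Decompose E into rotation R and translation direction t; the cheirality
    # check selects the physically valid solution.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t  # t is recovered only up to scale
```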
ARTIP: Automated Radio Telescope Image Processing Pipeline
NASA Astrophysics Data System (ADS)
Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh
2018-02-01
The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging radio-interferometric data. ARTIP starts with raw data, i.e., a measurement set, and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging, to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard Python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.
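For orientation, the calibration stages named above correspond to standard CASA tasks. The following is a hedged, minimal sketch of such calls; the measurement set, field names, and parameters are placeholders, not ARTIP's actual configuration.

```python
from casatasks import setjy, bandpass, gaincal, applycal  # modular CASA tasks

ms = 'target.ms'                 # placeholder measurement set
setjy(vis=ms, field='flux_cal')  # flux calibration: set the absolute flux scale
bandpass(vis=ms, field='bp_cal', caltable='cal.B')    # frequency-dependent gains
gaincal(vis=ms, field='phase_cal', caltable='cal.G',  # time-dependent complex gains
        gaintable=['cal.B'], calmode='ap')
applycal(vis=ms, field='target', gaintable=['cal.B', 'cal.G'])  # apply to target
```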
NASA Astrophysics Data System (ADS)
Rausch, Kameron; Houchin, Scott; Cardema, Jason; Moy, Gabriel; Haas, Evan; De Luccia, Frank J.
2013-12-01
Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) reflective bands are currently calibrated via weekly updates to look-up tables (LUTs) utilized by operational ground processing in the Joint Polar Satellite System Interface Data Processing Segment (IDPS). The parameters in these LUTs must be predicted ahead 2 weeks and cannot adequately track the dynamically varying response characteristics of the instrument. As a result, spurious "predict-ahead" calibration errors of the order of 0.1% or greater are routinely introduced into the calibrated reflectances and radiances produced by IDPS in sensor data records (SDRs). Spurious calibration errors of this magnitude adversely impact the quality of downstream environmental data records (EDRs) derived from VIIRS SDRs, such as Ocean Color/Chlorophyll, and cause increased striping and band-to-band radiometric calibration uncertainty of SDR products. A novel algorithm that fully automates reflective band calibration has been developed for implementation in IDPS in late 2013. Automating the reflective solar band (RSB) calibration is extremely challenging and represents a significant advancement over the manner in which RSB calibration has traditionally been performed in heritage instruments such as the Moderate Resolution Imaging Spectroradiometer. The automated algorithm applies calibration data almost immediately after their acquisition by the instrument from views of space and onboard calibration sources, thereby eliminating the predict-ahead errors associated with the current offline calibration process. This new algorithm, when implemented, will significantly improve the quality of VIIRS reflective band SDRs and consequently the quality of EDRs produced from these SDRs.
Automated image quality assessment for chest CT scans.
Reeves, Anthony P; Xie, Yiting; Liu, Shuang
2018-02-01
Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods. © 2017 American Association of Physicists in Medicine.
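The per-region measurements described above reduce to simple statistics once the homogeneous regions are segmented. A minimal sketch follows, assuming the masks are given; the expected HU values (about -1000 for air, about +40 for aortic blood) are textbook figures used for illustration, not values from the paper.

```python
import numpy as np

def region_stats(hu_volume, mask, expected_hu):
    """Noise and calibration measures for one homogeneous region of a CT scan.
    hu_volume: 3-D array in Hounsfield units; mask: boolean array, same shape."""
    values = hu_volume[mask]
    return {
        "noise_hu": float(values.std(ddof=1)),               # image noise
        "cal_error_hu": float(values.mean() - expected_hu),  # intensity offset
    }

# Usage with masks from automated segmentation (names hypothetical):
# region_stats(scan, external_air_mask, expected_hu=-1000.0)
# region_stats(scan, aorta_mask, expected_hu=40.0)
```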
Mischke, R; Wohlsein, P; Schoon, H-A
2005-01-01
The objective of the study was to examine the alterations of fibrin generation in dogs with haemangiosarcoma using resonance thrombography. The second objective was to evaluate the sensitivity of this method for the detection of hypofibrinogenaemia and/or increased fibrin(ogen) degradation product (FDP) concentration. Resonance thrombogram (RTG) measurements with two different instruments were performed in 30 unselected dogs with haemangiosarcoma, 14 of which had decreased fibrinogen and 28 of which had an increased FDP concentration (p<0.0001). The RTG-reaction time was less sensitive than the fibrin formation time (RTG-f) and fibrin amplitude (RTG-F). The RTG-f and RTG-F indicated reliably a decrease in fibrinogen concentration (sensitivity: 0.93). The sensitivity of detection of increased FDP levels was considerably higher than that of thrombin time. However, false-negative results were found even at FDP concentrations > or =120 mg/l, especially in cases with high fibrinogen level. Both machines showed similar sensitivity. The results of this study indicate that canine haemangiosarcoma is frequently associated with severe alterations of fibrin generation due to low fibrinogen and high FDP levels leading to distinct RTG abnormalities. The global test RTG reacts sensitively to a decreased fibrinogen level whereas its accuracy to detect FDP concentrations occurring under pathophysiological conditions is limited. A significant alteration of fibrin generation induced by FDPs may not occur until the serum FDP concentration exceeds 60 mg/l.
Automated geographic registration and radiometric correction for UAV-based mosaics
NASA Astrophysics Data System (ADS)
Thomasson, J. Alex; Shi, Yeyin; Sima, Chao; Yang, Chenghai; Cope, Dale A.
2017-05-01
Texas A&M University has been operating a large-scale, UAV-based, agricultural remote-sensing research project since 2015. To use UAV-based images in agricultural production, many high-resolution images must be mosaicked together to create an image of an agricultural field. Two key difficulties to science-based utilization of such mosaics are geographic registration and radiometric calibration. In our current research project, image files are taken to the computer laboratory after the flight, and semi-manual pre-processing is implemented on the raw image data, including ortho-mosaicking and radiometric calibration. Ground control points (GCPs) are critical for high-quality geographic registration of images during mosaicking. Applications requiring accurate reflectance data also require radiometric-calibration references so that reflectance values of image objects can be calculated. We have developed a method for automated geographic registration and radiometric correction with targets that are installed semi-permanently at distributed locations around fields. The targets are a combination of black (≈5% reflectance), dark gray (≈20% reflectance), and light gray (≈40% reflectance) sections that provide for a transformation of pixel value to reflectance in the dynamic range of crop fields. The exact spectral reflectance of each target is known, having been measured with a spectrophotometer. At the time of installation, each target is measured for position with a real-time kinematic GPS receiver to give its precise latitude and longitude. Automated location of the reference targets in the images is required for precise, automated geographic registration; and automated calculation of the digital-number to reflectance transformation is required for automated radiometric calibration. To validate the system for radiometric calibration, a calibrated UAV-based image mosaic of a field was compared to a calibrated single image from a manned aircraft. Reflectance values in selected zones of each image were strongly linearly related, and the average error of UAV-mosaic reflectances was 3.4% in the red band, 1.9% in the green band, and 1.5% in the blue band. Based on these results, the proposed physical system and automated software for calibrating UAV mosaics show excellent promise.
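The digital-number to reflectance transformation described above can be illustrated with a per-band linear fit through the three targets of known reflectance (an empirical-line-style calibration). The sketch below uses invented pixel values; the 5%/20%/40% reflectances echo the targets described in the text.

```python
import numpy as np

target_reflectance = np.array([0.05, 0.20, 0.40])  # black, dark gray, light gray
target_dn = np.array([2100.0, 7900.0, 15600.0])    # mean DN of each target, one band

# Linear DN -> reflectance transformation for this band (gain, offset).
gain, offset = np.polyfit(target_dn, target_reflectance, 1)

def dn_to_reflectance(dn):
    return gain * dn + offset

print(dn_to_reflectance(10000.0))  # reflectance estimate for a field pixel
```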
NASA Astrophysics Data System (ADS)
Golobokov, M.; Danilevich, S.
2018-04-01
In order to assess calibration reliability and to automate such assessment, procedures for data collection and a simulation study of the thermal imager calibration procedure have been developed. The existing calibration techniques do not always provide high reliability. A new method for analyzing existing calibration techniques and developing efficient new ones has been suggested and tested. A type of software has been studied that generates instrument calibration reports automatically, monitors their proper configuration, processes measurement results and assesses instrument validity. The use of such software reduces the man-hours spent on finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.
40 CFR 1066.215 - Summary of verification and calibration procedures for chassis dynamometers.
Code of Federal Regulations, 2012 CFR
2012-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Dynamometer... manufacturer instructions and good engineering judgment. (c) Automated dynamometer verifications and... accomplish the verifications and calibrations specified in this subpart. You may use these automated...
40 CFR 1066.215 - Summary of verification and calibration procedures for chassis dynamometers.
Code of Federal Regulations, 2013 CFR
2013-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Dynamometer... manufacturer instructions and good engineering judgment. (c) Automated dynamometer verifications and... accomplish the verifications and calibrations specified in this subpart. You may use these automated...
Flow through electrode with automated calibration
Szecsody, James E [Richland, WA; Williams, Mark D [Richland, WA; Vermeul, Vince R [Richland, WA
2002-08-20
The present invention is an improved automated flow through electrode liquid monitoring system. The automated system has a sample inlet to a sample pump, a sample outlet from the sample pump to at least one flow through electrode with a waste port. At least one computer controls the sample pump and records data from the at least one flow through electrode for a liquid sample. The improvement relies upon (a) at least one source of a calibration sample connected to (b) an injection valve connected to said sample outlet and connected to said source, said injection valve further connected to said at least one flow through electrode, wherein said injection valve is controlled by said computer to select between said liquid sample or said calibration sample. Advantages include improved accuracy because of more frequent calibrations, no additional labor for calibration, no need to remove the flow through electrode(s), and minimal interruption of sampling.
Automated response matching for organic scintillation detector arrays
NASA Astrophysics Data System (ADS)
Aspinall, M. D.; Joyce, M. J.; Cave, F. D.; Plenteda, R.; Tomanin, A.
2017-07-01
This paper identifies a digitizer technology with unique features that facilitates feedback control for the realization of a software-based technique for automatically calibrating detector responses. Three such auto-calibration techniques have been developed and are described, along with an explanation of the main configuration settings and potential pitfalls. Automating this process increases repeatability, simplifies user operation, and enables remote and periodic system calibration where consistency across detectors' responses is critical.
van der Laak, Jeroen A W M; Dijkman, Henry B P M; Pahlplatz, Martin M M
2006-03-01
The magnification factor in transmission electron microscopy is not very precise, hampering, for instance, quantitative analysis of specimens. Calibration of the magnification is usually performed interactively using replica specimens containing line or grating patterns with known spacing. In the present study, a procedure is described for automated magnification calibration using digital images of a line replica. This procedure is based on analysis of the power spectrum of Fourier-transformed replica images, and is compared to interactive measurement in the same images. Images were used with magnifications ranging from 1,000× to 200,000×. The automated procedure deviated on average 0.10% from interactive measurements. Especially for catalase replicas, the coefficient of variation of automated measurement was considerably smaller (average 0.28%) compared to that of interactive measurement (average 3.5%). In conclusion, calibration of the magnification in digital images from transmission electron microscopy may be performed automatically, using the procedure presented here, with high precision and accuracy.
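The power-spectrum analysis described above amounts to locating the dominant spatial frequency of the line replica. Here is a minimal sketch under simplifying assumptions (vertical lines, 1-D collapsed profile); the replica pitch and pixel numbers in the closing comment are illustrative.

```python
import numpy as np

def line_spacing_pixels(image):
    """Estimate replica line spacing (in pixels) from a 2-D image whose lines
    run vertically, via the peak of the 1-D Fourier power spectrum."""
    profile = image.mean(axis=0)                  # collapse rows to one profile
    power = np.abs(np.fft.rfft(profile - profile.mean())) ** 2
    freqs = np.fft.rfftfreq(profile.size)         # cycles per pixel
    peak = np.argmax(power[1:]) + 1               # skip the DC bin
    return 1.0 / freqs[peak]

# With a 2160 lines/mm replica (about 463 nm pitch), a measured spacing of
# 50 pixels would imply a scale of roughly 9.3 nm per pixel.
```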
Automated Heat-Flux-Calibration Facility
NASA Technical Reports Server (NTRS)
Liebert, Curt H.; Weikle, Donald H.
1989-01-01
Computer control speeds operation of equipment and processing of measurements. New heat-flux-calibration facility developed at Lewis Research Center. Used for fast-transient heat-transfer testing, durability testing, and calibration of heat-flux gauges. Calibrations performed at constant or transient heat fluxes ranging from 1 to 6 MW/m² and at temperatures ranging from 80 K to the melting temperatures of most materials. Facility developed because of the need to build and calibrate very small heat-flux gauges for the Space Shuttle main engine (SSME). Includes lamp head attached to side of service module, argon-gas-recirculation module, reflector, heat exchanger, and high-speed positioning system. This type of automated heat-flux-calibration facility can be installed in industrial plants for onsite calibration of heat-flux gauges measuring fluxes of heat in advanced gas-turbine and rocket engines.
An Automated Thermocouple Calibration System
NASA Technical Reports Server (NTRS)
Bethea, Mark D.; Rosenthal, Bruce N.
1992-01-01
An Automated Thermocouple Calibration System (ATCS) was developed for the unattended calibration of type K thermocouples. This system operates from room temperature to 650 °C and has been used for calibration of thermocouples in an eight-zone furnace system which may employ as many as 60 thermocouples simultaneously. It is highly efficient, allowing for the calibration of large numbers of thermocouples in significantly less time than required for manual calibrations. The system consists of a personal computer, a data acquisition/control unit, and a laboratory calibration furnace. The calibration furnace is a microprocessor-controlled multipurpose temperature calibrator with an accuracy of ±0.7 °C, traceable to the National Institute of Standards and Technology (NIST). The computer software is menu-based to give the user flexibility and ease of use; no programming experience is needed to operate the system. This system was specifically developed for use in the Microgravity Materials Science Laboratory (MMSL) at NASA LeRC.
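At its core, such a calibration reduces to mapping each thermocouple's readings onto the reference furnace temperature across setpoints. A hedged sketch follows, using a quadratic correction fit; the setpoints and readings are invented, not MMSL data.

```python
import numpy as np

reference_c = np.array([25.0, 150.0, 300.0, 450.0, 650.0])  # furnace setpoints, degC
readings_c = np.array([24.2, 148.9, 298.1, 447.6, 646.8])   # one type K thermocouple

# Quadratic correction polynomial: raw reading -> reference temperature.
coeffs = np.polyfit(readings_c, reference_c, 2)

def corrected_temperature(reading_c):
    return np.polyval(coeffs, reading_c)

print(corrected_temperature(300.0))  # corrected value for a raw 300 degC reading
```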
Towards the automated reduction and calibration of SCUBA data from the James Clerk Maxwell Telescope
NASA Astrophysics Data System (ADS)
Jenness, T.; Stevens, J. A.; Archibald, E. N.; Economou, F.; Jessop, N. E.; Robson, E. I.
2002-10-01
The Submillimetre Common User Bolometer Array (SCUBA) instrument has been operating on the James Clerk Maxwell Telescope (JCMT) since 1997. The data archive is now sufficiently large that it can be used to investigate instrumental properties and the variability of astronomical sources. This paper describes the automated calibration and reduction scheme used to process the archive data, with particular emphasis on `jiggle-map' observations of compact sources. We demonstrate the validity of our automated approach at both 850 and 450 μm, and apply it to several of the JCMT secondary flux calibrators. We determine light curves for the variable sources IRC +10216 and OH 231.8. This automation is made possible by using the ORAC-DR data reduction pipeline, a flexible and extensible data reduction pipeline that is used on the United Kingdom Infrared Telescope (UKIRT) and the JCMT.
Development of an automated film-reading system for ballistic ranges
NASA Technical Reports Server (NTRS)
Yates, Leslie A.
1992-01-01
Software for an automated film-reading system that uses personal computers and digitized shadowgraphs is described. The software identifies pixels associated with fiducial-line and model images, and least-squares procedures are used to calculate the positions and orientations of the images. Automated position and orientation readings for sphere and cone models are compared to those obtained using a manual film reader. When facility calibration errors are removed from these readings, the accuracy of the automated readings is better than the pixel resolution, and it is equal to, or better than, that of the manual readings. The effects of film-reading and facility-calibration errors on calculated aerodynamic coefficients are discussed.
Hydrometer calibration by hydrostatic weighing with automated liquid surface positioning
NASA Astrophysics Data System (ADS)
Aguilera, Jesus; Wright, John D.; Bean, Vern E.
2008-01-01
We describe an automated apparatus for calibrating hydrometers by hydrostatic weighing (Cuckow's method) in tridecane, a liquid of known, stable density with a relatively low surface tension and contact angle against glass. The apparatus uses a laser light sheet and a laser power meter to position the tridecane surface at the hydrometer scale mark to be calibrated, with an uncertainty of 0.08 mm. The calibration results have an expanded uncertainty (with a coverage factor of 2) of 100 parts in 10⁶ or less of the liquid density. We validated the apparatus by comparisons using water, toluene, tridecane and trichloroethylene, and found agreement within 40 parts in 10⁶ or less. The new calibration method is consistent with earlier, manual calibrations performed by NIST. When customers use calibrated hydrometers, they may encounter uncertainties of 370 parts in 10⁶ or larger due to surface tension, contact angle and temperature effects.
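The hydrostatic-weighing arithmetic behind Cuckow's method can be sketched in a few lines, ignoring the air-buoyancy and surface-tension corrections that a real calibration must include; all numbers below are invented for illustration.

```python
# Weigh the hydrometer in air, then immersed to a scale mark in a reference
# liquid of known density; the apparent mass loss gives the displaced volume.
rho_reference = 756.0   # kg/m^3, e.g. tridecane near room temperature (illustrative)
m_air = 0.0500          # kg, hydrometer weighed in air
m_immersed = 0.0200     # kg, apparent mass immersed to the mark

v_below_mark = (m_air - m_immersed) / rho_reference   # m^3 displaced at the mark

# Floating freely at that mark, weight balances buoyancy, so the density the
# mark should indicate is the total mass over the displaced volume.
rho_at_mark = m_air / v_below_mark
print(rho_at_mark)   # ~1260 kg/m^3 for these made-up numbers
```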
NASA Astrophysics Data System (ADS)
Barr, D.; Gilpatrick, J. D.; Martinez, D.; Shurter, R. B.
2004-11-01
The Los Alamos Neutron Science Center (LANSCE) facility at Los Alamos National Laboratory has constructed both an Isotope Production Facility (IPF) and a Switchyard Kicker (XDK) as additions to the H+ and H- accelerator. These additions contain eleven Beam Position Monitors (BPMs) that measure the beam's position throughout the transport. The analog electronics within each processing module determines the beam position using the log-ratio technique. For system reliability, calibrations compensate for various temperature drifts and other imperfections in the processing electronics components. Additionally, verifications are periodically implemented by a PC running a National Instruments LabVIEW virtual instrument (VI) to verify continued system and cable integrity. The VI communicates with the processor cards via a PCI/MXI-3 VXI-crate communication module. Previously, accelerator operators performed BPM system calibrations typically once per day while beam was explicitly turned off. One of this new measurement system's unique achievements is its automated calibration and verification capability. Taking advantage of the pulsed nature of the LANSCE-facility beams, the integrated electronics hardware and VI perform calibration and verification operations between beam pulses without interrupting production beam delivery. The design, construction, and performance results of the automated calibration and verification portion of this position measurement system will be the topic of this paper.
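For readers unfamiliar with the log-ratio technique mentioned above: the beam offset along one axis is derived from the logarithm of the ratio of opposing electrode amplitudes, scaled by a sensitivity constant fixed during calibration. A toy illustration follows; the constant and signal levels are invented.

```python
import math

def bpm_position_mm(v_a, v_b, k_mm=10.0):
    """Beam offset from opposing pickup amplitudes v_a, v_b (same units);
    k_mm is a calibrated sensitivity in mm per decade (hypothetical value)."""
    return k_mm * math.log10(v_a / v_b)

print(bpm_position_mm(1.10, 0.90))  # ~ +0.87 mm toward electrode A
```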
Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique
2016-01-01
High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems depends on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology to set up liquid class pipetting parameters for each solution split the process into three steps: (1) screening of predefined liquid classes with different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. Running the appropriate pipetting scripts, acquiring the data, and generating reports, through to the creation of a new liquid class in EVOware, was fully automated. The calibration and confirmation of the robotic system was simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719
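In a gravimetric check of this kind, each dispensed mass is converted to volume through the solution density, and accuracy and precision follow directly. A minimal sketch with invented numbers:

```python
import numpy as np

def gravimetric_check(masses_mg, density_mg_per_ul, target_ul):
    """Accuracy (% error of the mean) and precision (% CV) of dispensed
    volumes inferred from balance readings."""
    volumes = np.asarray(masses_mg) / density_mg_per_ul
    accuracy_pct = 100 * (volumes.mean() - target_ul) / target_ul
    precision_pct = 100 * volumes.std(ddof=1) / volumes.mean()
    return accuracy_pct, precision_pct

# Three replicate 100 uL dispenses weighed on an analytical balance.
print(gravimetric_check([99.1, 100.4, 99.8], density_mg_per_ul=0.998, target_ul=100.0))
```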
Jeff Cheatham, senior metrologist
2015-01-27
Jeff Cheatham, senior metrologist at the Marshall Metrology and Calibration Laboratory, spent 12 years developing 2,400 automated software procedures used for calibration and testing of space vehicles and equipment.
Continuous Calibration of Trust in Automated Systems
2014-01-01
Misuse and disuse of automation can have fatal consequences; for example, inappropriate automation reliance has been implicated in the recent crash of Asiana Airlines Flight 214 in San Francisco. Therefore, understanding how users form, lose, and recover trust in imperfect automation is of critical importance.
Parameterizations for reducing camera reprojection error for robot-world hand-eye calibration
USDA-ARS?s Scientific Manuscript database
Accurate robot-world, hand-eye calibration is crucial to automation tasks. In this paper, we discuss the robot-world, hand-eye calibration problem, which has been modeled as the linear relationship AX = ZB, where X and Z are the unknown calibration matrices composed of rotation and translation ...
Munksgaard, Niels C; Cheesman, Alexander W; Gray-Spence, Andrew; Cernusak, Lucas A; Bird, Michael I
2018-06-30
Continuous measurement of stable O and H isotope compositions in water vapour requires automated calibration for remote field deployments. We developed a new low-cost device for calibration of both water vapour mole fraction and isotope composition. We coupled a commercially available dew point generator (DPG) to a laser spectrometer and developed hardware for water and air handling along with software for automated operation and data processing. We characterised isotopic fractionation in the DPG, conducted a field test and assessed the influence of critical parameters on the performance of the device. An analysis time of 1 hour was sufficient to achieve memory-free analysis of two water vapour standards, and the δ¹⁸O and δ²H values were found to be independent of water vapour concentration over a range of ≈20,000-33,000 ppm. The reproducibility of the standard vapours over a 10-day period was better than 0.14 ‰ and 0.75 ‰ for δ¹⁸O and δ²H values, respectively (1σ, n = 11), prior to drift correction and calibration. The analytical accuracy was confirmed by the analysis of a third independent vapour standard. The DPG distillation process requires that isotope calibration take account of DPG temperature, analysis time, injected water volume and air flow rate. The automated calibration system provides high accuracy and precision and is a robust, cost-effective option for long-term field measurements of water vapour isotopes. The necessary modifications to the DPG are minor and easily reversible. Copyright © 2018 John Wiley & Sons, Ltd.
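Calibration against two vapour standards of this sort is typically a two-point linear normalization applied after drift correction. A hedged sketch follows; the assigned and measured values are invented, not the study's.

```python
import numpy as np

assigned = np.array([-10.0, -30.0])   # assigned d18O of the two standards (permil)
measured = np.array([-9.2, -28.1])    # drift-corrected measured values (permil)

slope, intercept = np.polyfit(measured, assigned, 1)  # two-point normalization

def calibrate(delta_measured):
    return slope * delta_measured + intercept

print(calibrate(-20.0))  # a sample's d18O mapped onto the reference scale
```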
An automated pressure data acquisition system for evaluation of pressure sensitive paint chemistries
NASA Technical Reports Server (NTRS)
Sealey, Bradley S.; Mitchell, Michael; Burkett, Cecil G.; Oglesby, Donald M.
1993-01-01
An automated pressure data acquisition system for testing of pressure sensitive phosphorescent paints was designed, assembled, and tested. The purpose of the calibration system is the evaluation and selection of pressure sensitive paint chemistries that could be used to obtain global aerodynamic pressure distribution measurements. The test apparatus and setup used for pressure sensitive paint characterizations is described. The pressure calibrations, thermal sensitivity effects, and photodegradation properties are discussed.
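Pressure-sensitive paint calibrations of this kind are conventionally described by the Stern-Volmer relation I_ref/I = A + B*(P/P_ref); the sketch below fits and inverts it using hypothetical data, not values from the NASA report:

```python
import numpy as np

# Hypothetical PSP calibration data at fixed temperature:
# luminescence ratio I_ref/I versus pressure ratio P/P_ref
p_ratio = np.array([0.4, 0.7, 1.0, 1.3, 1.6])
i_ratio = np.array([0.58, 0.79, 1.00, 1.21, 1.43])

# Stern-Volmer form: I_ref/I = A + B * (P/P_ref)
B, A = np.polyfit(p_ratio, i_ratio, 1)

def pressure_ratio(intensity_ratio):
    """Invert the calibration: recover P/P_ref from a measured I_ref/I."""
    return (intensity_ratio - A) / B

print(pressure_ratio(1.10))
```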
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Jordana R.; Gill, Gary A.; Kuo, Li-Jung
2016-04-20
Trace element determinations in seawater by inductively coupled plasma mass spectrometry are analytically challenging due to the typically very low concentrations of the trace elements and the potential interference of the salt matrix. In this study, we compared seven analytical approaches for uranium analysis by inductively coupled plasma mass spectrometry (ICP-MS) of Sequim Bay seawater samples and three seawater certified reference materials (SLEW-3, CASS-5 and NASS-6). The methods evaluated include: direct analysis, Fe/Pd reductive precipitation, standard addition calibration, online automated dilution using an external calibration with and without matrix matching, and online automated pre-concentration. The method which produced the most accurate results was standard addition calibration, recovering uranium from a Sequim Bay seawater sample at 101 ± 1.2%. The online pre-concentration method and the automated dilution with matrix-matched calibration method also performed well. The two least effective methods were the direct analysis and the Fe/Pd reductive precipitation using sodium borohydride.
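Standard addition, the best-performing method here, recovers the sample concentration from the x-intercept of signal versus added analyte. A schematic example with invented numbers:

```python
import numpy as np

# Hypothetical ICP-MS standard-addition data for U in seawater:
# added concentration (ug/L) and instrument response (counts/s)
added = np.array([0.0, 1.0, 2.0, 4.0])
signal = np.array([1520.0, 1985.0, 2450.0, 3390.0])

slope, intercept = np.polyfit(added, signal, 1)

# signal = slope * (C_sample + C_added), so at zero signal the
# x-intercept magnitude gives C_sample = intercept / slope
c_sample = intercept / slope
print(f"U in sample: {c_sample:.2f} ug/L")
```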
Sky camera geometric calibration using solar observations
Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan
2016-09-05
A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. In conclusion, calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
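A hedged sketch of the core idea: project modeled sun positions through an equisolid-angle model r = 2f*sin(theta/2) and minimize pixel reprojection error. Sun angles are assumed precomputed by a solar position algorithm; lens distortion and camera tilt are ignored, and the image-axis convention is ours, not the paper's:

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_fisheye(zen, az, u_obs, v_obs):
    """Fit an equisolid fisheye model (focal scale f, image center u0/v0,
    azimuth offset az0) to matched sun positions.

    zen, az: solar zenith and azimuth angles (radians), shape (n,)
    u_obs, v_obs: detected sun centroids in pixels, shape (n,)
    """
    def residuals(p):
        f, u0, v0, az0 = p
        r = 2.0 * f * np.sin(zen / 2.0)      # equisolid-angle projection
        u = u0 + r * np.sin(az - az0)        # assumed axis convention
        v = v0 - r * np.cos(az - az0)
        return np.concatenate([u - u_obs, v - v_obs])

    p0 = [500.0, np.mean(u_obs), np.mean(v_obs), 0.0]  # rough start
    fit = least_squares(residuals, p0)
    rmse = np.sqrt(np.mean(fit.fun ** 2))
    return fit.x, rmse
```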
ASTROPOP: ASTROnomical Polarimetry and Photometry pipeline
NASA Astrophysics Data System (ADS)
Campagnolo, Julio C. N.
2018-05-01
AstroPoP reduces almost any CCD photometry and image polarimetry data. For photometry reduction, the code performs source finding, aperture and PSF photometry, astrometric calibration using different automated and non-automated methods, and automated source identification and magnitude calibration based on online and local catalogs. For polarimetry, the code resolves linear and circular Stokes parameters produced by image-beam-splitter or polarizer polarimeters. In addition to the modular functions, ready-to-use pipelines based on configuration files and header keys are also provided with the code. AstroPoP was initially developed to reduce data from the IAGPOL polarimeter installed at Observatório Pico dos Dias (Brazil).
Zhu, Tengyi; Fu, Dafang; Jenkinson, Byron; Jafvert, Chad T
2015-04-01
The advective flow of sediment pore water is an important parameter for understanding natural geochemical processes within lake, river, wetland, and marine sediments and also for properly designing permeable remedial sediment caps placed over contaminated sediments. Automated heat pulse seepage meters can be used to measure the vertical component of sediment pore water flow (i.e., vertical Darcy velocity); however, little information on meter calibration as a function of ambient water temperature exists in the literature. As a result, a method with associated equations for calibrating a heat pulse seepage meter as a function of ambient water temperature is fully described in this paper. Results of meter calibration over the temperature range 7.5 to 21.2 °C indicate that errors in accuracy are significant if proper temperature-dependence calibration is not performed. The proposed calibration method allows for temperature corrections to be made automatically in the field at any ambient water temperature. The significance of these corrections is discussed.
Automated UAV-based mapping for airborne reconnaissance and video exploitation
NASA Astrophysics Data System (ADS)
Se, Stephen; Firoozfam, Pezhman; Goldstein, Norman; Wu, Linda; Dutkiewicz, Melanie; Pace, Paul; Naud, J. L. Pierre
2009-05-01
Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for force protection, situational awareness, mission planning, damage assessment and others. UAVs gather huge amounts of video data, but it is extremely labour-intensive for operators to analyse hours and hours of received data. At MDA, we have developed a suite of tools towards automated video exploitation including calibration, visualization, change detection and 3D reconstruction. The on-going work is to improve the robustness of these tools and automate the process as much as possible. Our calibration tool extracts and matches tie-points in the video frames incrementally to recover the camera calibration and poses, which are then refined by bundle adjustment. Our visualization tool stabilizes the video, expands its field-of-view and creates a geo-referenced mosaic from the video frames. It is important to identify anomalies in a scene, which may include detecting any improvised explosive devices (IEDs). However, it is tedious and difficult to compare video clips to look for differences manually. Our change detection tool allows the user to load two video clips taken from two passes at different times and flags any changes between them. 3D models are useful for situational awareness, as it is easier to understand the scene by visualizing it in 3D. Our 3D reconstruction tool creates calibrated photo-realistic 3D models from video clips taken from different viewpoints, using both semi-automated and automated approaches. The resulting 3D models also allow distance measurements and line-of-sight analysis.
Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, J.; Polly, B.; Collis, J.
2013-09-01
This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
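As one concrete illustration of the "simplified simulated annealing optimization approach", here is a generic toy sketch (our own code, not the BEopt/DOE-2.2 tooling; the caller supplies the model and data):

```python
import math
import random

def anneal(simulate, observed, bounds, iters=2000, t0=1.0, cooling=0.995):
    """Toy simulated-annealing calibration: perturb uncertain model inputs
    within their ranges and accept moves that reduce squared error against
    (synthetic) utility data; worse moves are accepted with a
    temperature-dependent probability to escape local minima."""
    names = list(bounds)

    def error(p):
        return sum((s - o) ** 2 for s, o in zip(simulate(p), observed))

    params = {k: random.uniform(*bounds[k]) for k in names}
    cur = best = error(params)
    best_params = dict(params)
    t = t0
    for _ in range(iters):
        cand = dict(params)
        k = random.choice(names)
        lo, hi = bounds[k]
        cand[k] = min(hi, max(lo, cand[k] + random.gauss(0.0, 0.1 * (hi - lo))))
        e = error(cand)
        if e < cur or random.random() < math.exp((cur - e) / t):
            params, cur = cand, e
            if e < best:
                best, best_params = e, dict(cand)
        t *= cooling
    return best_params, best
```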
Ground-based automated radiometric calibration system in Baotou site, China
NASA Astrophysics Data System (ADS)
Wang, Ning; Li, Chuanrong; Ma, Lingling; Liu, Yaokai; Meng, Fanrong; Zhao, Yongguang; Pang, Bo; Qian, Yonggang; Li, Wei; Tang, Lingli; Wang, Dongjin
2017-10-01
Post-launch vicarious calibration not only can be used to evaluate onboard calibrators but also allows traceable knowledge of absolute accuracy, although it suffers from low-frequency data collection owing to the expense of personnel and equipment. To overcome these problems, the CEOS Working Group on Calibration and Validation (WGCV) Infrared Visible Optical Sensors (IVOS) subgroup has proposed the Automated Radiative Calibration Network (RadCalNet) project. The Baotou site is one of the four demonstration sites of RadCalNet. The distinguishing characteristic of the Baotou site is its combination of various natural scenes and artificial targets. At each artificial target and the desert area, an automated spectrum measurement instrument has been developed to obtain surface-reflected radiance spectra every 2 minutes with a spectral resolution of 2 nm. The aerosol optical thickness and column water vapour content are measured by an automatic sun photometer. To meet the requirement of RadCalNet, a surface reflectance spectrum retrieval method is used to generate the standard input files, with the support of surface and atmospheric measurements. Then the top-of-atmosphere reflectance spectra are derived from the input files. Results for the demonstration satellites, including Landsat 8 and Sentinel-2A, show good agreement between observed and calculated values.
ERIC Educational Resources Information Center
Zhang, Mo
2013-01-01
Many testing programs use automated scoring to grade essays. One issue in automated essay scoring that has not been examined adequately is population invariance and its causes. The primary purpose of this study was to investigate the impact of sampling in model calibration on population invariance of automated scores. This study analyzed scores…
Multiple-Objective Stepwise Calibration Using Luca
Hay, Lauren E.; Umemoto, Makiko
2007-01-01
This report documents Luca (Let us calibrate), a multiple-objective, stepwise, automated procedure for hydrologic model calibration and the associated graphical user interface (GUI). Luca is a wizard-style user-friendly GUI that provides an easy systematic way of building and executing a calibration procedure. The calibration procedure uses the Shuffled Complex Evolution global search algorithm to calibrate any model compiled with the U.S. Geological Survey's Modular Modeling System. This process assures that intermediate and final states of the model are simulated consistently with measured values.
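The stepwise pattern Luca implements can be sketched generically. In the sketch below, SciPy's differential evolution stands in for the Shuffled Complex Evolution algorithm that Luca actually uses, and all names are ours:

```python
from scipy.optimize import differential_evolution

def stepwise_calibrate(run_model, steps, fixed=None):
    """Stepwise multi-objective calibration in the spirit of Luca: each
    step tunes one parameter subset against its own objective while the
    parameters calibrated in earlier steps stay fixed.

    run_model(params) -> dict of simulated outputs
    steps: list of ({name: (lo, hi)}, objective_fn(outputs) -> float)
    """
    fixed = dict(fixed or {})
    for bounds, objective in steps:
        names = list(bounds)

        def cost(x):
            params = dict(fixed)
            params.update(zip(names, x))
            return objective(run_model(params))

        result = differential_evolution(cost, [bounds[n] for n in names])
        fixed.update(zip(names, result.x))   # freeze this step's best values
    return fixed
```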
Optimal Test Design with Rule-Based Item Generation
ERIC Educational Resources Information Center
Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.
2013-01-01
Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…
A simple, accurate, field-portable mixing ratio generator and Rayleigh distillation device
USDA-ARS?s Scientific Manuscript database
Routine field calibration of water vapor analyzers has always been a challenging problem for those making long-term flux measurements at remote sites. Automated sampling of standard gases from compressed tanks, the method of choice for CO2 calibration, cannot be used for H2O. Calibrations are typica...
NASA Astrophysics Data System (ADS)
McCann, C.; Repasky, K. S.; Morin, M.; Lawrence, R. L.; Powell, S. L.
2016-12-01
Compact, cost-effective, flight-based hyperspectral imaging systems can provide scientifically relevant data over large areas for a variety of applications such as ecosystem studies, precision agriculture, and land management. To fully realize this capability, unsupervised classification techniques based on radiometrically-calibrated data that cluster based on biophysical similarity rather than simply spectral similarity are needed. An automated technique to produce high-resolution, large-area, radiometrically-calibrated hyperspectral data sets, using the Landsat surface reflectance data product as a calibration target, was developed and applied to three subsequent years of data covering approximately 1850 hectares. The radiometrically-calibrated data allow inter-comparison of the temporal series. Advantages of the radiometric calibration technique include the need for minimal site access, no ancillary instrumentation, and automated processing. Fitting the reflectance spectra of each pixel using a set of biophysically relevant basis functions reduces the data from 80 spectral bands to 9 parameters, providing noise reduction and data compression. Examination of histograms of these parameters allows natural splitting into biophysically similar clusters. This method creates clusters that are similar in terms of biophysical parameters, not simply spectral proximity. Furthermore, this method can be applied to other data sets, such as urban scenes, by developing other physically meaningful basis functions. The ability to use hyperspectral imaging for a variety of important applications requires the development of data processing techniques that can be automated. The radiometric calibration combined with the histogram-based unsupervised classification technique presented here provides one potential avenue for managing the big data associated with hyperspectral imaging.
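The dimensionality reduction step is an ordinary linear least-squares fit of each pixel's spectrum onto the basis set. A sketch with synthetic data and hypothetical basis functions (the paper's actual basis functions are not specified here):

```python
import numpy as np

def fit_basis(reflectance, basis):
    """Least-squares fit of one pixel's reflectance spectrum (n_bands,)
    onto a small set of basis functions (n_bands x n_basis), reducing
    ~80 bands to a few coefficients."""
    coeffs, residual, _, _ = np.linalg.lstsq(basis, reflectance, rcond=None)
    return coeffs, residual

# Synthetic demo: 80 bands, 9 hypothetical basis functions
wl = np.linspace(400, 1000, 80)
basis = np.column_stack([wl ** k for k in range(3)] +
                        [np.exp(-((wl - c) / 60.0) ** 2)
                         for c in (450, 550, 680, 720, 800, 900)])
pixel = basis @ np.random.default_rng(0).uniform(-1, 1, 9)
coeffs, _ = fit_basis(pixel, basis)
print(coeffs)
```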
Automated Mounting Bias Calibration for Airborne LIDAR System
NASA Astrophysics Data System (ADS)
Zhang, J.; Jiang, W.; Jiang, S.
2012-07-01
Mounting bias is the major error source of airborne LIDAR systems. In this paper, an automated calibration method for estimating LIDAR system mounting parameters is introduced. The LIDAR direct geo-referencing model is used to calculate systematic errors. Because LIDAR footprints are discretely sampled, truly corresponding laser points rarely exist across different strips, so the traditional corresponding-point methodology does not apply to LIDAR strip registration. We propose a Virtual Corresponding Point Model (VCPM) to resolve the correspondence problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints. Two rules are defined to calculate tie-point coordinates from real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in LIDAR strips, and the automatic flow of VCPM-based LIDAR system calibration is described in detail. Practical examples illustrate the feasibility and effectiveness of the proposed calibration method.
Design of an ultra-portable field transfer radiometer supporting automated vicarious calibration
NASA Astrophysics Data System (ADS)
Anderson, Nikolaus; Thome, Kurtis; Czapla-Myers, Jeffrey; Biggar, Stuart
2015-09-01
The University of Arizona Remote Sensing Group (RSG) began outfitting the radiometric calibration test site (RadCaTS) at Railroad Valley, Nevada in 2004 for automated vicarious calibration of Earth-observing sensors. RadCaTS was upgraded to use RSG custom 8-band ground-viewing radiometers (GVRs) beginning in 2011, and currently four GVRs are deployed, providing an average reflectance for the test site. This measurement of ground reflectance is the most critical component of vicarious calibration using the reflectance-based method. In order to ensure the quality of these measurements, RSG has been exploring more efficient and accurate methods of on-site calibration evaluation. This work describes the design of, and initial results from, a small portable transfer radiometer for the purpose of GVR calibration validation on site. Prior to deployment, RSG uses high-accuracy laboratory calibration methods in order to provide radiance calibrations with low uncertainties for each GVR. After deployment, a solar-radiation-based calibration has typically been used. That method is highly dependent on a clear, stable atmosphere, requires at least two people to perform, is time-consuming in post-processing, and is dependent on several large pieces of equipment. In order to provide more regular and more accurate calibration monitoring, the small portable transfer radiometer is designed for quick, one-person operation and on-site field calibration comparison results. The radiometer is also suited for laboratory calibration use and thus could be used as a transfer radiometer calibration standard for ground-viewing radiometers of a RadCalNet site.
Automated Reduction and Calibration of SCUBA Archive Data Using ORAC-DR
NASA Astrophysics Data System (ADS)
Jenness, T.; Stevens, J. A.; Archibald, E. N.; Economou, F.; Jessop, N.; Robson, E. I.; Tilanus, R. P. J.; Holland, W. S.
The Submillimetre Common User Bolometer Array (SCUBA) instrument has been operating on the James Clerk Maxwell Telescope (JCMT) since 1997. The data archive is now sufficiently large that it can be used for investigating instrumental properties and the variability of astronomical sources. This paper describes the automated calibration and reduction scheme used to process the archive data with particular emphasis on the pointing observations. This is made possible by using the ORAC-DR data reduction pipeline, a flexible and extensible data reduction pipeline that is used on UKIRT and the JCMT.
del Río, Joaquín; Aguzzi, Jacopo; Costa, Corrado; Menesatti, Paolo; Sbragaglia, Valerio; Nogueras, Marc; Sarda, Francesc; Manuèl, Antoni
2013-10-30
Field measurements of the swimming activity rhythms of fishes are scant due to the difficulty of counting individuals at a high frequency over a long period of time. Cabled observatory video monitoring allows such sampling at a high frequency over unlimited periods of time. Unfortunately, automation of the extraction of biological information (i.e., animals' visual counts per unit of time) is still a major bottleneck. In this study, we describe a new automated video-imaging protocol for the 24-h continuous counting of fishes in colorimetrically calibrated time-lapse photographic outputs, taken by a shallow-water (20 m depth) cabled video platform, the OBSEA. The spectral reflectance value for each patch was measured between 400 and 700 nm and then converted into standard RGB, used as a reference for all subsequent calibrations. All images were acquired within a standardized Region Of Interest (ROI), represented by a 2 × 2 m methacrylate panel endowed with a 9-colour calibration chart, and calibrated using the recently implemented "3D Thin-Plate Spline" warping approach in order to numerically define colour by its coordinates in n-dimensional space. That operation was repeated on a subset of 500 images used as a training set, manually selected because they were acquired under optimum visibility conditions. All images plus the training set were ordered together through Principal Component Analysis, allowing the selection of 614 (67.6%) of the 908 total images, corresponding to 18 days (at 30-min frequency). The Roberts operator (used in image processing and computer vision for edge detection) was used to highlight regions of high spatial colour gradient corresponding to fishes' bodies. Time series of manual and automated counts were compared to evaluate efficiency. Periodogram and waveform analysis outputs provided very similar results, although the quantified parameters relating to the strength of the respective rhythms differed. Results indicate that automation efficiency is limited by optimum visibility conditions. Data sets from manual counting present larger day-night fluctuations than those derived from automation. This comparison indicates that the automation protocol underestimates fish numbers but is nevertheless suitable for the study of community activity rhythms. PMID:24177726
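The edge-detection step mentioned above is the classic Roberts cross operator; a minimal sketch:

```python
import numpy as np
from scipy.ndimage import convolve

def roberts_edges(gray):
    """Roberts cross operator: gradient magnitude from two 2x2 kernels,
    highlighting regions of high spatial gradient (e.g. fish outlines)."""
    kx = np.array([[1.0, 0.0], [0.0, -1.0]])
    ky = np.array([[0.0, 1.0], [-1.0, 0.0]])
    gx = convolve(gray.astype(float), kx)
    gy = convolve(gray.astype(float), ky)
    return np.hypot(gx, gy)
```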
del Río, Joaquín; Aguzzi, Jacopo; Costa, Corrado; Menesatti, Paolo; Sbragaglia, Valerio; Nogueras, Marc; Sarda, Francesc; Manuèl, Antoni
2013-01-01
Field measurements of the swimming activity rhythms of fishes are scant due to the difficulty of counting individuals at a high frequency over a long period of time. Cabled observatory video monitoring allows such a sampling at a high frequency over unlimited periods of time. Unfortunately, automation for the extraction of biological information (i.e., animals' visual counts per unit of time) is still a major bottleneck. In this study, we describe a new automated video-imaging protocol for the 24-h continuous counting of fishes in colorimetrically calibrated time-lapse photographic outputs, taken by a shallow water (20 m depth) cabled video-platform, the OBSEA. The spectral reflectance value for each patch was measured between 400 to 700 nm and then converted into standard RGB, used as a reference for all subsequent calibrations. All the images were acquired within a standardized Region Of Interest (ROI), represented by a 2 × 2 m methacrylate panel, endowed with a 9-colour calibration chart, and calibrated using the recently implemented “3D Thin-Plate Spline” warping approach in order to numerically define color by its coordinates in n-dimensional space. That operation was repeated on a subset of images, 500 images as a training set, manually selected since acquired under optimum visibility conditions. All images plus those for the training set were ordered together through Principal Component Analysis allowing the selection of 614 images (67.6%) out of 908 as a total corresponding to 18 days (at 30 min frequency). The Roberts operator (used in image processing and computer vision for edge detection) was used to highlights regions of high spatial colour gradient corresponding to fishes' bodies. Time series in manual and visual counts were compared together for efficiency evaluation. Periodogram and waveform analysis outputs provided very similar results, although quantified parameters in relation to the strength of respective rhythms were different. Results indicate that automation efficiency is limited by optimum visibility conditions. Data sets from manual counting present the larger day-night fluctuations in comparison to those derived from automation. This comparison indicates that the automation protocol subestimate fish numbers but it is anyway suitable for the study of community activity rhythms. PMID:24177726
[Automated analyser of organ cultured corneal endothelial mosaic].
Gain, P; Thuret, G; Chiquet, C; Gavet, Y; Turc, P H; Théillère, C; Acquart, S; Le Petit, J C; Maugery, J; Campos, L
2002-05-01
Until now, organ-cultured corneal endothelial mosaics have been assessed in France by cell counting using a calibrated graticule, or by drawing cells on a computerized image. The former method is unsatisfactory because it lacks an objective evaluation of cell surface and hexagonality and requires an experienced technician. The latter method is time-consuming and requires careful attention. We aimed to develop an efficient, fast, easy-to-use automated digital analyzer of video images of the corneal endothelium. The hardware included a PC Pentium III ((R)) 800 MHz-RAM 256, a Data Translation 3155 acquisition card, a Sony SC 75 CE CCD camera, and a 22-inch screen. Special functions for automated cell boundary determination consisted of plug-in programs included in the ImageTool software. Calibration was performed using a calibrated micrometer. Cell densities of 40 organ-cultured corneas measured by both manual and automated counting were compared using parametric tests (Student's t test for paired variables and the Pearson correlation coefficient). All steps were considered more ergonomic, i.e., endothelial image capture, image selection, thresholding of multiple areas of interest, automated cell count, automated detection of errors in cell boundary drawing, and presentation of the results in an HTML file including the number of counted cells, cell density, coefficient of variation of cell area, cell surface histogram and cell hexagonality. The device was efficient because the global process lasted on average 7 minutes and did not require an experienced technician. The correlation between cell densities obtained with the two methods was high (r=+0.84, p<0.001). The results showed an underestimation with manual counting compared with the automated method (2191+/-322 vs. 2273+/-457 cells/mm(2), p=0.046). Our automated endothelial cell analyzer is efficient and gives reliable results quickly and easily. A multicentric validation would allow us to standardize cell counts among cornea banks in our country.
Parallel computing for automated model calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data and models, freeing scientists to focus effort on improving the models themselves, is needed. We are in the process of building a fully featured, multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null-cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes, and the process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed-computing, cross-platform environment. They allow incorporation of 'smart' calibration parameter generation (using artificial intelligence processing techniques). Null-cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
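Because each calibration run needs no inter-process communication, the pattern is embarrassingly parallel. A toy multiprocessing sketch (our own illustration, not the authors' distributed, null-cycle implementation):

```python
from itertools import product
from multiprocessing import Pool

def run_model(params):
    """Placeholder for a single model run: returns (params, fit statistic).
    Real runs are independent, which is what makes this embarrassingly
    parallel."""
    a, b = params
    return params, (a - 2.0) ** 2 + (b + 1.0) ** 2   # toy objective

if __name__ == "__main__":
    grid = product([x / 10 for x in range(0, 41)],
                   [y / 10 for y in range(-30, 11)])
    with Pool() as pool:
        results = pool.map(run_model, grid)
    best = min(results, key=lambda r: r[1])
    print("best parameters:", best[0], "objective:", best[1])
```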
Status of the calibration and alignment framework at the Belle II experiment
NASA Astrophysics Data System (ADS)
Dossett, D.; Sevior, M.; Ritter, M.; Kuhr, T.; Bilka, T.; Yaschenko, S.;
2017-10-01
The Belle II detector at the SuperKEKB e+e- collider plans to take first collision data in 2018. The monetary and CPU-time costs associated with storing and processing the data mean that it is crucial for the detector components at Belle II to be calibrated quickly and accurately. A fast and accurate calibration system would allow the high-level trigger to increase the efficiency of event selection and can give users analysis-quality reconstruction promptly. A flexible framework to automate the fast production of calibration constants is being developed in the Belle II Analysis Software Framework (basf2). Detector experts only need to create two components from C++ base classes in order to use the automation system. The first collects data from Belle II event data files and outputs much smaller files to pass to the second component, which runs the main calibration algorithm to produce calibration constants ready for upload into the conditions database. A Python framework coordinates the input files, order of processing, and submission of jobs. Splitting the operation into collection and algorithm-processing stages allows the framework to optionally parallelize the collection stage on a batch system.
Automated Assessment of Postural Stability (AAPS)
2016-10-01
...scoring. Furthermore, we have begun the process of testing with human volunteers and used our preliminary data to quantify system calibration and limitations of performance. We have also compared our system's...
Ionospheric Modeling: Development, Verification and Validation
2005-09-01
facilitate the automated processing of a large network of GPS receiver data. 4. CALIBRATION AND VALIDATION OF IONOSPHERIC SENSORS: We have been...NOFS Workshop, Estes Park, CO, January 2005. W. Rideout, A. Coster, P. Doherty, MIT Haystack, Automated Processing of GPS Data to Produce Worldwide TEC
40 CFR 1066.215 - Summary of verification procedures for chassis dynamometers.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Dynamometer Specifications § 1066.215 Summary... judgment. (c) Automated dynamometer verifications and calibrations. In some cases, dynamometers are... specified in this subpart. You may use these automated functions instead of following the procedures we...
Icing research tunnel rotating bar calibration measurement system
NASA Technical Reports Server (NTRS)
Gibson, Theresa L.; Dearmon, John M.
1993-01-01
In order to measure icing patterns across a test section of the Icing Research Tunnel, an automated rotating bar measurement system was developed at the NASA Lewis Research Center. In comparison with the previously used manual measurement system, this system provides a number of improvements: increased accuracy and repeatability, increased number of data points, reduced tunnel operating time, and improved documentation. The automated system uses a linear variable differential transformer (LVDT) to measure ice accretion. This instrument is driven along the bar by means of an intelligent stepper motor which also controls data recording. This paper describes the rotating bar calibration measurement system.
NASA Technical Reports Server (NTRS)
2008-01-01
The Aquarius Radiometer, a subsystem of the Aquarius Instrument, required a data acquisition ground system to support calibration and radiometer performance assessment. To support calibration and compose performance assessments, we developed an automated system which uploaded raw data to an FTP server and saved raw and processed data to a database. This paper details the overall functionality of the Aquarius Instrument Science Data System (ISDS) and the individual electrical ground support equipment (EGSE) which produced data files that were infused into the ISDS. Real-time EGSEs include an ICDS simulator, calibration GSE, a LabVIEW-controlled power supply, and a chamber data acquisition system. The ICDS simulator serves as the test conductor's primary workstation, collecting radiometer housekeeping (HK) and science data and passing commands and HK telemetry collection requests to the radiometer. The calibration GSE (Radiometer Active Test Source) provides a choice of multiple targets for external calibration of the radiometer. The power supply GSE, controlled by LabVIEW, provides real-time voltage and current monitoring of the radiometer. Finally, the chamber data acquisition system produces data reflecting chamber vacuum pressure, thermistor temperatures, AVG and watts. Each GSE system produces text-based data files every two to six minutes and automatically copies the data files to the central archiver PC. The archiver PC stores the data files, schedules automated uploads of these files to an external FTP server, and accepts requests to copy all data files to the ISDS for offline data processing and analysis. The Aquarius Radiometer ISDS contains PHP and MATLAB programs to parse, process and save all data to a MySQL database. Analysis tools (MATLAB programs) in the ISDS are capable of displaying radiometer science, telemetry and auxiliary data in near real time, as well as performing data analysis and producing automated performance assessment reports of the Aquarius Radiometer.
Autotune Calibrates Models to Building Use Data
None
2018-01-16
Models of existing buildings are currently unreliable unless calibrated manually by a skilled professional. Autotune, as the name implies, automates this process by calibrating the model of an existing building to measured data, and is now available as open source software. This enables private businesses to incorporate Autotune into their products so that their customers can more effectively estimate cost savings of reduced energy consumption measures in existing buildings.
Automated Calibration of Atmospheric Oxidized Mercury Measurements.
Lyman, Seth; Jones, Colleen; O'Neil, Trevor; Allen, Tanner; Miller, Matthieu; Gustin, Mae Sexauer; Pierce, Ashley M; Luke, Winston; Ren, Xinrong; Kelley, Paul
2016-12-06
The atmosphere is an important reservoir for mercury pollution, and understanding of oxidation processes is essential to elucidating the fate of atmospheric mercury. Several recent studies have shown that a low bias exists in a widely applied method for atmospheric oxidized mercury measurements. We developed an automated, permeation-tube-based calibrator for elemental and oxidized mercury, and we integrated this calibrator with atmospheric mercury instrumentation (Tekran 2537/1130/1135 speciation systems) in Reno, Nevada and at Mauna Loa Observatory, Hawaii, U.S.A. While the calibrator has limitations, it was able to routinely inject stable amounts of HgCl2 and HgBr2 into atmospheric mercury measurement systems over periods of several months. In Reno, recovery of injected mercury compounds as gaseous oxidized mercury (as opposed to elemental mercury) decreased with increasing specific humidity, as has been shown in other studies, although this trend was not observed at Mauna Loa, likely due to differences in atmospheric chemistry at the two locations. Recovery of injected mercury compounds as oxidized mercury was greater at Mauna Loa than in Reno, and greater still for a cation-exchange membrane-based measurement system. These results show that routine calibration of atmospheric oxidized mercury measurements is both feasible and necessary.
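The expected output of a permeation-tube calibrator follows from the emission rate divided by the dilution flow; a small worked example with hypothetical values (not figures from the study):

```python
def expected_concentration_pg_m3(rate_ng_min, flow_L_min):
    """Expected analyte concentration delivered by a permeation-tube
    calibrator: emission rate over dilution gas flow.
    1 ng/L = 1e6 pg/m^3."""
    return rate_ng_min / flow_L_min * 1e6

# e.g. a tube emitting 0.05 ng/min of HgCl2 into 10 L/min of zero air
print(expected_concentration_pg_m3(0.05, 10.0))  # 5000 pg/m^3
```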
NASA Astrophysics Data System (ADS)
Evans, Aaron H.
Thermal remote sensing is a powerful tool for measuring the spatial variability of evapotranspiration due to the cooling effect of vaporization. The residual method is a popular technique which calculates evapotranspiration by subtracting sensible heat from available energy. Estimating sensible heat requires the aerodynamic surface temperature, which is difficult to retrieve accurately. Methods such as SEBAL/METRIC correct for this problem by calibrating the relationship between sensible heat and retrieved surface temperature. Disadvantages of these calibrations are that (1) the user must manually identify extremely dry and wet pixels in the image and (2) each calibration is only applicable over a limited spatial extent. Producing larger maps is operationally limited by the time required to manually calibrate multiple spatial extents over multiple days. This dissertation develops techniques which automatically detect dry and wet pixels. LANDSAT imagery is used because it resolves dry pixels. Calibrations using (1) only dry pixels and (2) dry plus wet pixels are developed. Snapshots of retrieved evaporative fraction and actual evapotranspiration are compared to eddy covariance measurements for five study areas in Florida: 1) Big Cypress, 2) Disney Wilderness, 3) Everglades, 4) near Gainesville, FL, and 5) Kennedy Space Center. The sensitivity of evaporative fraction to temperature, available energy, roughness length, and wind speed is tested. A technique for temporally interpolating evapotranspiration by fusing LANDSAT and MODIS is developed and tested. The automated algorithm is successful at detecting wet and dry pixels (if they exist). Including wet pixels in the calibration and assuming constant atmospheric conductance significantly improved results for all but Big Cypress and Gainesville. Evaporative fraction is not very sensitive to instantaneous available energy, but it is sensitive to temperature when wet pixels are included, because temperature is required for estimating wet-pixel evapotranspiration. Data fusion techniques only slightly outperformed linear interpolation. Eddy covariance comparison and temporal interpolation produced acceptable bias error for most cases, suggesting automated calibration and interpolation could be used to predict monthly or annual ET. Maps demonstrating spatial patterns of evapotranspiration at field scale were successfully produced, but only for limited spatial extents. A framework has been established for producing larger maps by creating a mosaic of smaller individual maps.
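The residual method itself reduces to simple energy-balance arithmetic; a minimal sketch with illustrative numbers:

```python
def evaporative_fraction(Rn, G, H):
    """Residual surface energy balance: latent heat flux LE = Rn - G - H,
    evaporative fraction EF = LE / (Rn - G). All fluxes in W/m^2."""
    LE = Rn - G - H
    return LE / (Rn - G)

# e.g. Rn=550, G=60, H=190 W/m^2 -> LE=300 W/m^2, EF ~ 0.61
print(evaporative_fraction(550.0, 60.0, 190.0))
```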
NASA Astrophysics Data System (ADS)
Ryan, D. P.; Roth, G. S.
1982-04-01
Complete documentation of the 15 programs and 11 data files of the EPA Atomic Absorption Instrument Automation System is presented. The system incorporates the following major features: (1) multipoint calibration using first, second, or third degree regression or linear interpolation, (2) timely quality control assessments for spiked samples, duplicates, laboratory control standards, reagent blanks, and instrument check standards, (3) reagent blank subtraction, and (4) plotting of calibration curves and raw data peaks. The programs of this system are written in Data General Extended BASIC, Revision 4.3, as enhanced for multi-user, real-time data acquisition. They run in a Data General Nova 840 minicomputer under the operating system RDOS, Revision 6.2. There is a functional description, a symbol definitions table, a functional flowchart, a program listing, and a symbol cross reference table for each program. The structure of every data file is also detailed.
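Features (1) and (3), multipoint calibration plus reagent-blank subtraction, can be sketched in a few lines. This is a hypothetical first-degree case regressing concentration on blank-corrected absorbance, not the system's BASIC code:

```python
import numpy as np

# Hypothetical AA calibration standards: concentration (mg/L) vs absorbance
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
absb = np.array([0.002, 0.101, 0.198, 0.385, 0.742])

# Reagent-blank subtraction, then a first-degree fit; the system also
# offers second/third-degree regression and linear interpolation.
blank = absb[0]
coeffs = np.polyfit(absb - blank, conc, 1)

def read_sample(a):
    """Map a sample absorbance to concentration after blank subtraction."""
    return np.polyval(coeffs, a - blank)

print(read_sample(0.250))
```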
Automatic analysis of quantitative NMR data of pharmaceutical compound libraries.
Liu, Xuejun; Kolpak, Michael X; Wu, Jiejun; Leo, Gregory C
2012-08-07
In drug discovery, chemical library compounds are usually dissolved in DMSO at a certain concentration and then distributed to biologists for target screening. Quantitative (1)H NMR (qNMR) is the preferred method for determining the actual concentrations of compounds because the relative single-proton peak areas of two chemical species represent the relative molar concentrations of the two compounds, that is, the compound of interest and a calibrant. Thus, an analyte concentration can be determined using a calibration compound at a known concentration. One particularly time-consuming step in the qNMR analysis of compound libraries is the manual integration of peaks. This report presents an automated method for performing this task without prior knowledge of compound structures and by using an external calibration spectrum. The script for automated integration is fast and adaptable to large-scale data sets, eliminating the need for manual integration in ~80% of the cases.
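The underlying arithmetic is a per-proton area ratio against the calibrant (for external calibration, valid only when acquisition conditions match between spectra); a worked sketch with invented numbers:

```python
def qnmr_concentration(area_analyte, n_h_analyte,
                       area_cal, n_h_cal, conc_cal_mM):
    """qNMR quantification: normalized (per-proton) peak areas are
    proportional to molar concentration, so
    C_analyte = C_cal * (A_analyte / n_analyte) / (A_cal / n_cal)."""
    return conc_cal_mM * (area_analyte / n_h_analyte) / (area_cal / n_h_cal)

# e.g. an analyte singlet (2H) integrating to 1.84 versus a 10 mM
# calibrant singlet (3H) integrating to 2.50 -> about 11 mM
print(qnmr_concentration(1.84, 2, 2.50, 3, 10.0))
```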
2012-02-09
The calibrated data are then sent to NRL Stennis Space Center (NRL-SSC) for further processing using the NRL SSC Automated Processing System (APS...hyperspectral sensor in space we have not previously developed automated processing for hyperspectral ocean color data. The hyperspectral processing branch
Enhancing Ear and Hearing Health Access for Children With Technology and Connectivity.
Swanepoel, De Wet
2017-10-12
Technology and connectivity advances are demonstrating increasing potential to improve access to service delivery for persons with hearing loss. This article demonstrates use cases from community-based hearing screening and automated diagnosis of ear disease. This brief report reviews recent evidence for school- and home-based hearing testing in underserved communities using smartphone technologies paired with calibrated headphones. Another area of potential impact facilitated by technology and connectivity is the use of feature-extraction algorithms to facilitate automated diagnosis of the most common ear conditions from video-otoscopic images. Smartphone hearing screening using calibrated headphones demonstrated equivalent sensitivity and specificity for school-based hearing screening. Automating test sequences with a forced-choice response paradigm allowed persons with minimal training to offer screening in underserved communities. The automated image analysis and diagnosis system for ear disease demonstrated an overall accuracy of 80.6%, which is on par with or exceeds accuracy rates previously reported for general practitioners and pediatricians. The emergence of these tools that capitalize on technology and connectivity advances enables affordable and accessible models of service delivery for community-based ear and hearing care.
Building a framework to manage trust in automation
NASA Astrophysics Data System (ADS)
Metcalfe, J. S.; Marathe, A. R.; Haynes, B.; Paul, V. J.; Gremillion, G. M.; Drnec, K.; Atwater, C.; Estepp, J. R.; Lukos, J. R.; Carter, E. C.; Nothwang, W. D.
2017-05-01
All automations must, at some point in their lifecycle, interface with one or more humans. Whether operators, end-users, or bystanders, human responses can determine the perceived utility and acceptance of an automation. It has long been believed that human trust is a primary determinant of human-automation interactions and further presumed that calibrating trust can lead to appropriate choices regarding automation use. However, attempts to improve joint system performance by calibrating trust have not yet provided a generalizable solution. To address this, we identified several factors limiting the direct integration of trust, or metrics thereof, into an active mitigation strategy. The present paper outlines our approach to addressing this important issue, its conceptual underpinnings, and practical challenges encountered in execution. Among the most critical outcomes has been a shift in focus from trust to basic interaction behaviors and their antecedent decisions. This change in focus inspired the development of a testbed and paradigm that was deployed in two experiments of human interactions with driving automation that were executed in an immersive, full-motion simulation environment. Moreover, by integrating a behavior and physiology-based predictor within a novel consequence-based control system, we demonstrated that it is possible to anticipate particular interaction behaviors and influence humans towards more optimal choices about automation use in real time. Importantly, this research provides a fertile foundation for the development and integration of advanced, wearable technologies for sensing and inferring critical state variables for better integration of human elements into otherwise fully autonomous systems.
USDA-ARS?s Scientific Manuscript database
The use of distributed parameter models to address water resource management problems has increased in recent years. Calibration is necessary to reduce the uncertainties associated with model input parameters. Manual calibration of a distributed parameter model is a very time consuming effort. There...
Yeung, Joanne Chung Yan; de Lannoy, Inés; Gien, Brad; Vuckovic, Dajana; Yang, Yingbo; Bojko, Barbara; Pawliszyn, Janusz
2012-09-12
In vivo solid-phase microextraction (SPME) can be used to sample the circulating blood of animals without the need to withdraw a representative blood sample. In this study, in vivo SPME in combination with liquid-chromatography tandem mass spectrometry (LC-MS/MS) was used to determine the pharmacokinetics of two drug analytes, R,R-fenoterol and R,R-methoxyfenoterol, administered as 5 mg kg(-1) i.v. bolus doses to groups of 5 rats. This research illustrates, for the first time, the feasibility of the diffusion-based calibration interface model for in vivo SPME studies. To provide a constant sampling rate as required for the diffusion-based interface model, partial automation of the SPME sampling of the analytes from the circulating blood was accomplished using an automated blood sampling system. The use of the blood sampling system allowed automation of all SPME sampling steps in vivo, except for the insertion and removal of the SPME probe from the sampling interface. The results from in vivo SPME were compared to the conventional method based on blood withdrawal and sample clean up by plasma protein precipitation. Both whole blood and plasma concentrations were determined by the conventional method. The concentrations of methoxyfenoterol and fenoterol obtained by SPME generally concur with the whole blood concentrations determined by the conventional method indicating the utility of the proposed method. The proposed diffusion-based interface model has several advantages over other kinetic calibration models for in vivo SPME sampling including (i) it does not require the addition of a standard into the sample matrix during in vivo studies, (ii) it is simple and rapid and eliminates the need to pre-load appropriate standard onto the SPME extraction phase and (iii) the calibration constant for SPME can be calculated based on the diffusion coefficient, extraction time, fiber length and radius, and size of the boundary layer. In the current study, the experimental calibration constants of 338.9±30 mm(-3) and 298.5±25 mm(-3) are in excellent agreement with the theoretical calibration constants of 307.9 mm(-3) and 316.0 mm(-3) for fenoterol and methoxyfenoterol respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
Comprehensive Calibration and Validation Site for Information Remote Sensing
NASA Astrophysics Data System (ADS)
Li, C. R.; Tang, L. L.; Ma, L. L.; Zhou, Y. S.; Gao, C. X.; Wang, N.; Li, X. H.; Wang, X. H.; Zhu, X. H.
2015-04-01
As a natural component of information technology, remote sensing (RS) is strongly required to provide precise and accurate information products to serve industry, academia and the public in this information era. To meet the need for high-quality RS products, building a fully functional and advanced calibration system, including measuring instruments, measuring approaches and target sites, becomes extremely important. Supported by MOST of China via the national plan, great progress has been made in constructing a comprehensive calibration and validation (Cal&Val) site, located in Baotou, 600 km west of Beijing, which integrates most functions of RS sensor aviation testing, EO satellite on-orbit calibration and performance assessment, and RS product validation. The site is equipped with various artificial standard targets, both portable and permanent, which support long-term calibration and validation. A number of well-designed ground measuring instruments and airborne standard sensors have been developed to realize high-accuracy stepwise validation, an approach that avoids or reduces uncertainties caused by nonsynchronized measurements. As part of its contribution to the worldwide Cal&Val study coordinated by CEOS-WGCV, the Baotou site is offering its support to the Radiometric Calibration Network of Automated Instruments (RadCalNet), with the aim of providing a demonstrated global standard automated radiometric calibration service in cooperation with ESA, NASA, CNES and NPL. Furthermore, several Cal&Val campaigns have been performed during the past years to calibrate and validate spaceborne/airborne optical and SAR sensors, and the results of some typical demonstrations are discussed in this study.
2012-02-01
use the ERDC software implementation of the secant LM method that accommodates the PEST model-independent interface to calibrate a GSSHA...how the method works. We will also demonstrate how our LM/SLM implementation compares with its counterparts as implemented in the popular PEST...function values and total model calls for local search to converge) associated with Examples 1 and 3 using the PEST LM/SLM implementations
NASA Astrophysics Data System (ADS)
Boyarnikov, A. V.; Boyarnikova, L. V.; Kozhushko, A. A.; Sekachev, A. F.
2017-08-01
This article considers the process of verification (calibration) of the secondary equipment of oil metering units. The purpose of the work is to increase the reliability and reduce the complexity of this process by developing a software and hardware system that provides automated verification and calibration. The hardware part of this system switches the measuring channels of the verified controller and the reference channels of the calibrator in accordance with the programmed algorithm. The developed software controls the switching of channels, sets values on the calibrator, reads the measured data from the controller, calculates errors and compiles protocols. This system can be used for checking the controllers of the secondary equipment of oil metering units in the automatic verification mode (with an open communication protocol) or in the semi-automatic verification mode (without it). The distinctive feature of the approach is a universal, software-controlled signal switch that can be configured for various verification (calibration) methods, which makes it possible to cover the entire range of controllers of metering units' secondary equipment. Automatic verification using the hardware and software system shortens verification time by a factor of 5-10 and increases measurement reliability by excluding the influence of the human factor.
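The error calculation at the heart of such verification is simple; a hedged sketch assuming a 4-20 mA channel and a hypothetical tolerance (the article does not specify either):

```python
def channel_error(setpoint, reading, span=(4.0, 20.0), tolerance_pct=0.1):
    """Error of one measuring channel as percent of span (e.g. a 4-20 mA
    input), plus a pass/fail flag against the permitted tolerance."""
    span_width = span[1] - span[0]
    err_pct = (reading - setpoint) / span_width * 100.0
    return err_pct, abs(err_pct) <= tolerance_pct

# calibrator sources 12.000 mA, controller reports 12.006 mA
print(channel_error(12.000, 12.006))  # (0.0375 %, True)
```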
Life Sciences Research Facility automation requirements and concepts for the Space Station
NASA Technical Reports Server (NTRS)
Rasmussen, Daryl N.
1986-01-01
An evaluation is made of the methods and preliminary results of a study on prospects for the automation of the NASA Space Station's Life Sciences Research Facility. In order to remain within current Space Station resource allocations, approximately 85 percent of planned life science experiment tasks must be automated; these tasks encompass specimen care and feeding, cage and instrument cleaning, data acquisition and control, sample analysis, waste management, instrument calibration, materials inventory and management, and janitorial work. Task automation will free crews for specimen manipulation, tissue sampling, data interpretation and communication with ground controllers, and experiment management.
A completely automated flow, heat-capacity, calorimeter for use at high temperatures and pressures
NASA Astrophysics Data System (ADS)
Rogers, P. S. Z.; Sandarusi, Jamal
1990-11-01
An automated, flow calorimeter has been constructed to measure the isobaric heat capacities of concentrated, aqueous electrolyte solutions using a differential calorimetry technique. The calorimeter is capable of operation to 700 K and 40 MPa with a measurement accuracy of 0.03% relative to the heat capacity of the pure reference fluid (water). A novel design encloses the calorimeter within a double set of separately controlled, copper, adiabatic shields that minimize calorimeter heat losses and precisely control the temperature of the inlet fluids. A multistage preheat train, used to efficiently heat the flowing fluid, includes a counter-current heat exchanger for the inlet and outlet fluid streams in tandem with two calorimeter preheaters. Complete system automation is accomplished with a distributed control scheme using multiple processors, allowing the major control tasks of calorimeter operation and control, data logging and display, and pump control to be performed simultaneously. A sophisticated pumping strategy for the two separate syringe pumps allows continuous fluid delivery. This automation system enables the calorimeter to operate unattended except for the reloading of sample fluids. In addition, automation has allowed the development and implementation of an improved heat loss calibration method that provides calorimeter calibration with absolute accuracy comparable to the overall measurement precision, even for very concentrated solutions.
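The differential measurement relies on P = mdot * cp * dT for a flowing fluid; a minimal sketch of the resulting ratio formula (our simplification, ignoring the heat-loss calibration described above):

```python
def sample_heat_capacity(cp_water, p_sample, p_water, mdot_sample, mdot_water):
    """Differential flow calorimetry: with both fluids driven through the
    same temperature rise, P = mdot * cp * dT, so the sample's isobaric
    heat capacity follows from the power and mass-flow ratios relative
    to the water reference."""
    return cp_water * (p_sample / p_water) * (mdot_water / mdot_sample)

# e.g. cp_water = 4.18 J/(g K), hypothetical power and flow readings
print(sample_heat_capacity(4.18, 0.95, 1.00, 1.02, 1.00))
```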
Data processing and in-flight calibration systems for OMI-EOS-Aura
NASA Astrophysics Data System (ADS)
van den Oord, G. H. J.; Dobber, M.; van de Vegte, J.; van der Neut, I.; Som de Cerff, W.; Rozemeijer, N. C.; Schenkelaars, V.; ter Linden, M.
2006-08-01
The OMI instrument that flies on the EOS Aura mission was launched in July 2004. OMI is a UV-VIS imaging spectrometer that measures in the 270 - 500 nm wavelength range. OMI provides daily global coverage with high spatial resolution. Every 100-minute orbit, OMI generates about 0.5 GB of Level 0 data and 1.2 GB of Level 1 data. About half of the Level 1 data consists of in-flight calibration measurements. These data rates make it necessary to automate the process of in-flight calibration. For that purpose two facilities have been developed at KNMI in the Netherlands: the OMI Dutch Processing System (ODPS) and the Trend Monitoring and In-flight Calibration Facility (TMCF). A description of these systems is provided with emphasis on their use for radiometric, spectral and detector calibration and characterization. With the advance of detector technology and the need for higher spatial resolution, data rates will become even higher for future missions. To make effective use of automated systems like the TMCF, it is of paramount importance to integrate the instrument operations concept, the information contained in the Level 1 (meta-)data products and the in-flight calibration software and system databases. In this way a robust but also flexible end-to-end system can be developed that serves the needs of the calibration staff, the scientific data users and the processing staff. The way this has been implemented for OMI may serve as an example of a cost-effective and user-friendly solution for future missions. The basic system requirements for in-flight calibration are discussed and examples are given of how these requirements have been implemented for OMI. Special attention is paid to the aspect of supporting the Level 0 - 1 processing with timely and accurate calibration constants.
Assessing Writing in MOOCs: Automated Essay Scoring and Calibrated Peer Review™
ERIC Educational Resources Information Center
Balfour, Stephen P.
2013-01-01
Two of the largest Massive Open Online Course (MOOC) organizations have chosen different methods for the way they will score and provide feedback on essays students submit. EdX, MIT and Harvard's non-profit MOOC federation, recently announced that they will use a machine-based Automated Essay Scoring (AES) application to assess written work in…
NASA Astrophysics Data System (ADS)
Rivers, Thane D.
1992-06-01
An Automated Scanning Monochromator was developed using an Acton Research Corporation (ARC) monochromator, an Ealing photomultiplier tube and a Macintosh PC in conjunction with LabVIEW software. The LabVIEW Virtual Instrument written to operate the ARC monochromator is a mouse-driven, user-friendly program developed for automated spectral data measurements. The resolution and sensitivity of the automated scanning monochromator system were determined experimentally. The automated monochromator was then used for spectral measurements of a platinum lamp. Additionally, the reflectivity curve for a BaSO4-coated screen was measured. The reflectivity measurements indicate a large discrepancy with expected results; further analysis of the reflectivity experiment is required for conclusive results.
Li, Wei; Abram, François; Pelletier, Jean-Pierre; Raynauld, Jean-Pierre; Dorais, Marc; d'Anjou, Marc-André; Martel-Pelletier, Johanne
2010-01-01
Joint effusion is frequently associated with osteoarthritis (OA) flare-up and is an important marker of therapeutic response. This study aimed at developing and validating a fully automated system based on magnetic resonance imaging (MRI) for the quantification of joint effusion volume in knee OA patients. MRI examinations consisted of two axial sequences: a T2-weighted true fast imaging with steady-state precession and a T1-weighted gradient echo. An automated joint effusion volume quantification system using MRI was developed and validated (a) with calibrated phantoms (cylinder and sphere) and effusion from knee OA patients; (b) with assessment by manual quantification; and (c) by direct aspiration. Twenty-five knee OA patients with joint effusion were included in the study. The automated joint effusion volume quantification was developed as a four stage sequencing process: bone segmentation, filtering of unrelated structures, segmentation of joint effusion, and subvoxel volume calculation. Validation experiments revealed excellent coefficients of variation with the calibrated cylinder (1.4%) and sphere (0.8%) phantoms. Comparison of the OA knee joint effusion volume assessed by the developed automated system and by manual quantification was also excellent (r = 0.98; P < 0.0001), as was the comparison with direct aspiration (r = 0.88; P = 0.0008). The newly developed fully automated MRI-based system provided precise quantification of OA knee joint effusion volume with excellent correlation with data from phantoms, a manual system, and joint aspiration. Such an automated system will be instrumental in improving the reproducibility/reliability of the evaluation of this marker in clinical application.
A critical evaluation of automated blood gas measurements in comparative respiratory physiology.
Malte, Christian Lind; Jakobsen, Sashia Lindhøj; Wang, Tobias
2014-12-01
Precise measurements of blood gases and pH are of pivotal importance to respiratory physiology. However, the traditional electrodes that could be calibrated and maintained at the same temperature as the experimental animal are increasingly being replaced by new automated blood gas analyzers. These are typically designed for clinical use and automatically heat the blood sample to 37°C for measurement. While most blood gas analyzers allow for temperature corrections of the measurements, the underlying algorithms are based on temperature effects in human blood, and any discrepancy in temperature dependency between the blood of a given species and human blood will bias the measurements. In this study, we review the effects of temperature on blood gases and pH and evaluate the performance of an automated blood gas analyzer (GEM Premier 3500). Whole blood obtained from pythons and freshwater turtles was equilibrated in rotating Eschweiler tonometers to a variety of known P(O2) and P(CO2) values in gas mixtures prepared by Wösthoff gas-mixing pumps, and blood samples were measured immediately on the GEM Premier 3500. The pH measurements were compared with measurements using a Radiometer BMS glass capillary pH electrode kept and calibrated at the experimental temperature. We show that while the blood gas analyzer provides reliable temperature corrections for P(CO2) and pH, P(O2) measurements were substantially biased. This agrees with the theoretical considerations and emphasizes the need for critical calibrations/corrections when using automated blood gas analyzers. Copyright © 2014 Elsevier Inc. All rights reserved.
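The human-blood correction algorithms mentioned above are typically simple linear or exponential functions of the temperature offset from 37°C. The sketch below uses coefficients commonly quoted for human blood (e.g., the Rosenthal factor for pH); these values are assumptions typical of clinical analyzers, not taken from this paper, and the study's point is precisely that applying them to reptile blood can bias results, particularly for P(O2), whose temperature dependence is saturation-dependent and species-specific.

```python
# Commonly used human-blood temperature corrections (assumed coefficients,
# not taken from this paper). P(O2) is deliberately omitted: its correction
# is saturation-dependent, which is where the study found the largest bias.

def ph_at_body_temp(ph_37, t_body_c):
    """Correct a pH measured at 37 C back to the animal's body temperature."""
    return ph_37 - 0.0147 * (t_body_c - 37.0)   # Rosenthal-type factor

def pco2_at_body_temp(pco2_37, t_body_c):
    """Correct P(CO2) (any pressure unit) from 37 C to body temperature."""
    return pco2_37 * 10 ** (0.019 * (t_body_c - 37.0))

print(ph_at_body_temp(7.40, 25.0))    # ~7.58 at 25 C
print(pco2_at_body_temp(40.0, 25.0))  # ~23.7 at 25 C
```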
An automated data collection system for a Charpy impact tester
NASA Technical Reports Server (NTRS)
Weigman, Bernard J.; Spiegel, F. Xavier
1993-01-01
A method for automated data collection has been developed for a Charpy impact tester. A potentiometer connected to the pivot point of the hammer measures the angular displacement of the hammer. These data are collected by a computer and, through appropriate software, the energy absorbed by the specimen is accurately recorded. The device can be easily calibrated with minimal effort.
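The conversion from angular displacement to absorbed energy follows from the pendulum energy balance: the specimen absorbs the difference between the hammer's drop height and its rise height after fracture. The following is a minimal sketch of that relation, not the authors' actual software; friction and windage corrections are omitted.

```python
import math

# Generic pendulum energy balance for a Charpy tester: the potentiometer
# gives the release angle and the rise angle after fracture; the difference
# in pendulum height gives the energy absorbed by the specimen.

def absorbed_energy(mass_kg, arm_m, release_deg, rise_deg, g=9.81):
    """Energy absorbed by the specimen, in joules.

    Angles are measured from the vertical (hanging) position.
    """
    h_drop = arm_m * (1 - math.cos(math.radians(release_deg)))
    h_rise = arm_m * (1 - math.cos(math.radians(rise_deg)))
    return mass_kg * g * (h_drop - h_rise)

# Illustrative hammer: 20 kg mass on a 0.75 m arm
print(f"{absorbed_energy(20.0, 0.75, release_deg=140, rise_deg=95):.1f} J")
```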
Sochor, Jiri; Ryvolova, Marketa; Krystofova, Olga; Salas, Petr; Hubalek, Jaromir; Adam, Vojtech; Trnkova, Libuse; Havel, Ladislav; Beklova, Miroslava; Zehnalek, Josef; Provaznik, Ivo; Kizek, Rene
2010-11-29
The aim of this study was to describe the behaviour, kinetics, time courses and limitations of six different fully automated spectrometric methods: DPPH, TEAC, FRAP, DMPD, Free Radicals and Blue CrO5. Absorption curves were measured and absorbance maxima were found. All methods were calibrated using the standard compounds Trolox® and/or gallic acid. Calibration curves were determined (relative standard deviations were within the range from 1.5 to 2.5%). The obtained characteristics were compared and discussed. Moreover, the data obtained were used to optimize and automate all of the mentioned protocols. The automatic analyzer allowed us to analyse a larger set of samples simultaneously, to decrease the measurement time, to eliminate errors and to provide data of higher quality in comparison to manual analysis. The total time of analysis for one sample was decreased to 10 min for all six methods. In contrast, the total time of manual spectrometric determination was approximately 120 min. The obtained data showed good correlations between the studied methods (R=0.97-0.99).
Flow rate calibration to determine cell-derived microparticles and homogeneity of blood components.
Noulsri, Egarit; Lerdwana, Surada; Kittisares, Kulvara; Palasuwan, Attakorn; Palasuwan, Duangdao
2017-08-01
Cell-derived microparticles (MPs) are currently of great interest in the screening of transfusion donors and blood components. However, the current approach to counting MPs is not affordable for routine laboratory use due to its high cost. The present study aimed to investigate the potential use of flow-rate calibration for counting MPs in whole blood, packed red blood cells (PRBCs), and platelet concentrates (PCs). The accuracy of flow-rate calibration was investigated by comparing the platelet counts of an automated counter and a flow-rate calibrator. The concentrations of MPs and their origins in whole blood (n=100), PRBCs (n=100), and PCs (n=92) were determined using a FACSCalibur. The MPs' fold-changes were calculated to assess the homogeneity of the blood components. Comparing the platelet counts obtained by automated counting and flow-rate calibration showed an r² of 0.6 (y=0.69x+97,620). The CVs of the within-run and between-run variations of flow-rate calibration were 8.2% and 12.1%, respectively. The Bland-Altman plot showed a mean bias of -31,142 platelets/μl. MP enumeration revealed differences in both the MP levels and their origins in whole blood, PRBCs, and PCs. Screening the blood components demonstrated high heterogeneity of the MP levels in PCs when compared to whole blood and PRBCs. The results of the present study support the accuracy and precision of flow-rate calibration for enumerating MPs. This flow-rate approach is affordable for assessing the homogeneity of MPs in blood components in routine laboratory practice. Copyright © 2017 Elsevier Ltd. All rights reserved.
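In flow-rate calibration, absolute counts are derived from the number of events acquired during a timed run divided by the analyzed volume, with the sample flow rate itself calibrated beforehand (for example gravimetrically). A minimal sketch, with illustrative names and numbers not taken from the paper:

```python
# Minimal sketch of flow-rate-based enumeration: the cytometer's sample flow
# rate is first calibrated (here, by weighing a tube before and after a timed
# run), then absolute counts follow from events / analyzed volume.

def flow_rate_ul_per_min(mass_before_g, mass_after_g, minutes, density_g_per_ml=1.0):
    """Sample flow rate inferred gravimetrically, in microlitres per minute."""
    volume_ul = (mass_before_g - mass_after_g) / density_g_per_ml * 1000.0
    return volume_ul / minutes

def mp_concentration(events, acquisition_s, flow_ul_min, dilution_factor=1.0):
    """Microparticles per microlitre of the original sample."""
    analyzed_ul = flow_ul_min * (acquisition_s / 60.0)
    return events / analyzed_ul * dilution_factor

flow = flow_rate_ul_per_min(5.000, 4.940, minutes=1.0)   # 60 uL/min
print(mp_concentration(events=12_000, acquisition_s=60, flow_ul_min=flow))  # 200 MPs/uL
```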
NASA Technical Reports Server (NTRS)
Dejong, Gerrit; Polderman, Michel C.
1995-01-01
The measurement of the difference between the transmit and receive delays of the signals in a Two-Way Satellite Time and Frequency Transfer (TWSTFT) Earth station is crucial for its nanosecond time transfer capability. Monitoring the change of this delay difference with time, temperature, humidity, or barometric pressure is also important for improving TWSTFT capabilities. An automated system for this purpose has been developed from the initial design at NMi-VSL. It calibrates separately the transmit and receive delays in cables, amplifiers, upconverters and downconverters, and antenna feeds. The obtained results can be applied as corrections to the TWSTFT measurement when a calibration session is performed before and after a measurement session. Preliminary results obtained at NMi-VSL will be shown, as will, if available, the results of a manual version of the system that is planned to be circulated in Sept. 1994 together with a USNO portable station on a calibration trip to European TWSTFT Earth stations.
Self-Calibrating Pressure Transducer
NASA Technical Reports Server (NTRS)
Lueck, Dale E. (Inventor)
2006-01-01
A self-calibrating pressure transducer is disclosed. The device uses an embedded zirconia membrane which pumps a determined quantity of oxygen into the device. The associated pressure can be determined, and thus, the transducer pressure readings can be calibrated. The zirconia membrane obtains oxygen from the surrounding environment when possible. Otherwise, an oxygen reservoir or other source is utilized. In another embodiment, a reversible fuel cell assembly is used to pump oxygen and hydrogen into the system. Since a known amount of gas is pumped across the cell, the pressure produced can be determined, and thus, the device can be calibrated. An isolation valve system is used to allow the device to be calibrated in situ. Calibration is optionally automated so that calibration can be continuously monitored. The device is preferably a fully integrated MEMS device. Since the device can be calibrated without removing it from the process, reductions in costs and down time are realized.
ERIC Educational Resources Information Center
Zhang, Mo; Williamson, David M.; Breyer, F. Jay; Trapani, Catherine
2012-01-01
This article describes two separate, related studies that provide insight into the effectiveness of "e-rater" score calibration methods based on different distributional targets. In the first study, we developed and evaluated a new type of "e-rater" scoring model that was cost-effective and applicable under conditions of absent human rating and…
Device for modular input high-speed multi-channel digitizing of electrical data
VanDeusen, Alan L.; Crist, Charles E.
1995-09-26
A multi-channel high-speed digitizer module converts a plurality of analog signals to digital signals (digitizing) and stores the signals in a memory device. The analog input channels are digitized simultaneously at high speed with a relatively large number of on-board memory data points per channel. The module provides an automated calibration based upon a single voltage reference source. Low signal noise at such a high density and sample rate is accomplished by ensuring the A/D converters are clocked at the same point in the noise cycle each time so that synchronous noise sampling occurs. This sampling process, in conjunction with an automated calibration, yields signal noise levels well below the noise level present on the analog reference voltages.
An automated calibration laboratory - Requirements and design approach
NASA Technical Reports Server (NTRS)
O'Neil-Rood, Nora; Glover, Richard D.
1990-01-01
NASA's Dryden Flight Research Facility (Ames-Dryden), operates a diverse fleet of research aircraft which are heavily instrumented to provide both real time data for in-flight monitoring and recorded data for postflight analysis. Ames-Dryden's existing automated calibration (AUTOCAL) laboratory is a computerized facility which tests aircraft sensors to certify accuracy for anticipated harsh flight environments. Recently, a major AUTOCAL lab upgrade was initiated; the goal of this modernization is to enhance productivity and improve configuration management for both software and test data. The new system will have multiple testing stations employing distributed processing linked by a local area network to a centralized database. The baseline requirements for the new AUTOCAL lab and the design approach being taken for its mechanization are described.
A new method for automated dynamic calibration of tipping-bucket rain gauges
Humphrey, M.D.; Istok, J.D.; Lee, J.Y.; Hevesi, J.A.; Flint, A.L.
1997-01-01
Existing methods for dynamic calibration of tipping-bucket rain gauges (TBRs) can be time consuming and labor intensive. A new automated dynamic calibration system has been developed to calibrate TBRs with minimal effort. The system consists of a programmable pump, datalogger, digital balance, and computer. Calibration is performed in two steps: 1) pump calibration and 2) rain gauge calibration. Pump calibration ensures precise control of water flow rates delivered to the rain gauge funnel; rain gauge calibration ensures precise conversion of bucket tip times to actual rainfall rates. Calibration of the pump and one rain gauge for 10 selected pump rates typically requires about 8 h. Data files generated during rain gauge calibration are used to compute rainfall intensities and amounts from a record of bucket tip times collected in the field. The system was tested using 5 types of commercial TBRs (15.2-, 20.3-, and 30.5-cm diameters; 0.1-, 0.2-, and 1.0-mm resolutions) and using 14 TBRs of a single type (20.3-cm diameter; 0.1-mm resolution). Ten pump rates ranging from 3 to 154 mL min⁻¹ were used to calibrate the TBRs and represented rainfall rates between 6 and 254 mm h⁻¹ depending on the rain gauge diameter. All pump calibration results were very linear with R² values greater than 0.99. All rain gauges exhibited large nonlinear underestimation errors (between 5% and 29%) that decreased with increasing rain gauge resolution and increased with increasing rainfall rate, especially for rates greater than 50 mm h⁻¹. Calibration curves of bucket tip time against the reciprocal of the true pump rate for all rain gauges also were linear with R² values of 0.99. Calibration data for the 14 rain gauges of the same type were very similar, as indicated by slope values that were within 14% of each other and ranged from about 367 to 417 s mm h⁻¹. The developed system can calibrate TBRs efficiently, accurately, and virtually unattended and could be modified for use with other rain gauge designs. The system is now in routine use to calibrate TBRs in a large rainfall collection network at Yucca Mountain, Nevada.
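The calibration reduces to fitting the linear relation between inter-tip time and the reciprocal of the true rate, t = a/R + b, and then inverting field tip-time records. A short sketch with illustrative numbers chosen to give a slope in the range quoted above:

```python
import numpy as np

# Sketch of the two-step reduction described above (illustrative numbers):
# fit the linear calibration of bucket inter-tip time against 1/true rate,
# then invert a field record of tip times into rainfall intensities.

true_rate = np.array([6.0, 12.0, 25.0, 50.0, 100.0, 200.0])  # lab rates, mm/h
tip_time = np.array([62.0, 31.5, 15.6, 8.1, 4.3, 2.4])       # mean s between tips

slope, intercept = np.polyfit(1.0 / true_rate, tip_time, 1)  # t = a/R + b

def rainfall_rate(delta_t_s):
    """Rainfall intensity (mm/h) from the time between successive tips."""
    return slope / (delta_t_s - intercept)

print(f"t = {slope:.1f}/R + {intercept:.2f}")
print(f"{rainfall_rate(10.0):.1f} mm/h")
```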
NASA Astrophysics Data System (ADS)
Czapla-Myers, Jeffrey S.; Anderson, Nikolaus J.
2017-09-01
The Radiometric Calibration Test Site (RadCaTS) is an automated facility developed by the Remote Sensing Group (RSG) at the University of Arizona to provide radiometric calibration data for airborne and satellite sensors. RadCaTS uses stationary ground-viewing radiometers (GVRs) to spatially sample the surface reflectance of the site. The number and location of the GVRs are based on previous spatial, spectral, and temporal analyses of Railroad Valley. With the increase in high-resolution satellite sensors, there is renewed interest in examining the spatial uniformity of the 1-km² RadCaTS area at scales smaller than a typical 30-m sensor. RadCaTS is one of the four instrumented sites currently in the CEOS WGCV Radiometric Calibration Network (RadCalNet), which aims to harmonize the post-launch radiometric calibration of satellite sensors through the use of a global network of automated calibration sites. A better understanding of the RadCaTS spatial uniformity as a function of pixel size will also benefit the RadCalNet work. RSG has recently acquired a commercially available small unmanned airborne system (sUAS), with which preliminary spatial homogeneity measurements of the 1-km² RadCaTS area were made. This work describes an initial assessment of the airborne platform and integrated camera for spatial studies of RadCaTS using data collected in 2016 and 2017.
Michael Palace; Michael Keller; Gregory P. Asner; Stephen Hagen; Bobby Braswell
2008-01-01
We developed an automated tree crown analysis algorithm using 1-m panchromatic IKONOS satellite images to examine forest canopy structure in the Brazilian Amazon. The algorithm was calibrated on the landscape level with tree geometry and forest stand data at the Fazenda Cauaxi (3.75° S, 48.37° W) in the eastern Amazon, and then compared with forest...
Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin
Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.
2006-01-01
The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a step wise, multiple objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph are simulated, consistently with measured values.
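The essence of the procedure is that each step calibrates only its own parameter subset against its own objective and then freezes the result before the next step. The sketch below illustrates that control flow; scipy's differential_evolution stands in for the Shuffled Complex Evolution search used in the paper, and run_prms() is a hypothetical wrapper around the hydrologic model.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Sketch of the step-wise strategy: each step calibrates one parameter subset
# with earlier steps held fixed. differential_evolution is a stand-in for the
# Shuffled Complex Evolution search; run_prms() is a hypothetical model wrapper.

def rmse(sim, obs):
    return float(np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2)))

def calibrate_step(run_prms, frozen, bounds, observed, output):
    """Calibrate one parameter subset with earlier steps held fixed."""
    def objective(theta):
        sim = run_prms(dict(frozen), theta, output)   # output: 'sr', 'pet', 'runoff', ...
        return rmse(sim, observed)
    result = differential_evolution(objective, bounds, seed=0, tol=1e-6)
    return result.x

# frozen = {}
# frozen['sr']  = calibrate_step(run_prms, frozen, sr_bounds,  obs_sr,  'sr')
# frozen['pet'] = calibrate_step(run_prms, frozen, pet_bounds, obs_pet, 'pet')
# ... then the water balance and daily runoff in the same fashion.
```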
Chimenea and other tools: Automated imaging of multi-epoch radio-synthesis data with CASA
NASA Astrophysics Data System (ADS)
Staley, T. D.; Anderson, G. E.
2015-11-01
In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of several component libraries which have been packaged separately for open-source release. The most scientifically significant of these is chimenea, which implements a telescope-agnostic algorithm for automated imaging of pre-calibrated multi-epoch radio-synthesis data, of the sort typically acquired for transient surveys or follow-up. The algorithm aims to improve upon standard imaging pipelines by utilizing iterative RMS-estimation and automated source-detection to avoid so called 'Clean-bias', and makes use of CASA subroutines for the underlying image-synthesis operations. At a lower level, AMIsurvey relies upon two libraries, drive-ami and drive-casa, built to allow use of mature radio-astronomy software packages from within Python scripts. While targeted at automated imaging, the drive-casa interface can also be used to automate interaction with any of the CASA subroutines from a generic Python process. Additionally, these packages may be of wider technical interest beyond radio-astronomy, since they demonstrate use of the Python library pexpect to emulate terminal interaction with an external process. This approach allows for rapid development of a Python interface to any legacy or externally-maintained pipeline which accepts command-line input, without requiring alterations to the original code.
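The iterative idea at the core of chimenea can be summarized as: image shallowly, measure the residual RMS, detect sources, and re-clean all epochs with a common mask and an RMS-tied threshold. The following schematic conveys that loop; clean(), estimate_rms(), detect_sources() and build_mask() are hypothetical stand-ins, not the actual chimenea or CASA API.

```python
# Schematic of the iterative masked-imaging idea described above (not the
# actual chimenea API). Tying the clean threshold to the measured residual
# RMS, rather than cleaning to a fixed depth, is what suppresses 'Clean-bias'.

def image_epochs(epochs, n_iter=3, threshold_sigma=3.0):
    mask = None
    for _ in range(n_iter):
        dirty_maps = [clean(ep, mask=mask, niter=200) for ep in epochs]
        rms = max(estimate_rms(m) for m in dirty_maps)
        sources = set()
        for m in dirty_maps:
            sources |= detect_sources(m, threshold=threshold_sigma * rms)
        mask = build_mask(sources)       # one common mask across all epochs
    # final deep clean of every epoch with the converged mask and threshold
    return [clean(ep, mask=mask, threshold=threshold_sigma * rms) for ep in epochs]
```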
Construction and calibration of a low cost and fully automated vibrating sample magnetometer
NASA Astrophysics Data System (ADS)
El-Alaily, T. M.; El-Nimr, M. K.; Saafan, S. A.; Kamel, M. M.; Meaz, T. M.; Assar, S. T.
2015-07-01
A low-cost vibrating sample magnetometer (VSM) has been constructed using an electromagnet and an audio loudspeaker, both controlled by a data acquisition device. The constructed VSM records the magnetic hysteresis loop up to 8.3 kG at room temperature. The apparatus was calibrated and tested using magnetic hysteresis data of ferrite samples measured by two scientifically calibrated magnetometers: model Lake Shore 7410 and model LDJ Electronics Inc. (Troy, MI). Our lab-built VSM design proved successful and reliable.
Automation of Endmember Pixel Selection in SEBAL/METRIC Model
NASA Astrophysics Data System (ADS)
Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.
2015-12-01
The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC), require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time-efficient way of identifying endmember pixels for use in these models. The fully automated models were applied on over 100 cloud-free Landsat images, with each image covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination R² ≥ 0.92, Nash-Sutcliffe efficiency NSE ≥ 0.92, and root mean squared error RMSE ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day⁻¹). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce time demands for applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias that could be introduced by an inexperienced operator and extend the domain of the models to new users.
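For reference, the two agreement metrics quoted above are conventionally defined as follows (a sketch; the paper's exact implementation is not given in the abstract):

```python
import numpy as np

# Conventional definitions of the agreement metrics quoted in the abstract.

def rmse(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def nash_sutcliffe(sim, obs):
    """NSE = 1 - SSE/SST; 1 is a perfect match, 0 is no better than the mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2))

obs = np.array([2.1, 3.4, 4.0, 2.8, 1.9])   # e.g. daily ET, mm/day (illustrative)
sim = np.array([2.0, 3.6, 3.8, 2.9, 2.1])
print(rmse(sim, obs), nash_sutcliffe(sim, obs))
```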
Pleil, Joachim D; Angrish, Michelle M; Madden, Michael C
2015-12-11
Immunochemistry is an important clinical tool for indicating biological pathways leading towards disease. Standard enzyme-linked immunosorbent assays (ELISA) are labor intensive and lack sensitivity at low concentrations. Here we report on emerging technology implementing fully automated ELISA capable of molecular-level detection and describe its application to exhaled breath condensate (EBC) samples. The Quanterix SIMOA HD-1 analyzer was evaluated for analytical performance on inflammatory cytokines (IL-6, TNF-α, IL-1β and IL-8). The system was challenged with human EBC, which represents the most dilute and analytically difficult of the biological media. Calibrations from synthetic samples and spiked EBC showed excellent linearity at trace levels (r² > 0.99). Sensitivities varied by analyte, but were robust, from ~0.006 (IL-6) to ~0.01 (TNF-α) pg ml⁻¹. All analytes demonstrated response suppression when diluted with deionized water, and so assay buffer diluent was found to be a better choice. Analytical runs required ~45 min setup time for loading samples, reagents, calibrants, etc., after which the instrument performs without further intervention for up to 288 separate samples. Currently available kits are limited to single-plex analyses, and so sample volumes require adjustment. Sample dilutions should be made with assay diluent to avoid response suppression. Automation performs seamlessly, and data are automatically analyzed and reported in spreadsheet format. The internal 5-parameter logistic (5PL) calibration model should be supplemented with a linear regression spline at the very lowest analyte levels (<1.3 pg ml⁻¹). The implementation of the automated Quanterix platform was successfully demonstrated using EBC, which poses the greatest challenge to ELISA due to limited sample volumes and low protein levels.
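The 5PL calibration model mentioned in the conclusions has a standard functional form, shown below with a conventional parameterization (an assumption; the analyzer's internal fitting details are not given in the abstract):

```python
import numpy as np
from scipy.optimize import curve_fit

# The 5-parameter logistic (5PL) calibration model in its usual form:
#   y = d + (a - d) / (1 + (x / c)**b)**g
# Calibrant concentrations and signals below are illustrative placeholders.

def five_pl(x, a, d, c, b, g):
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])        # pg/mL
signal = np.array([0.02, 0.05, 0.16, 0.45, 1.30, 2.70, 3.90])  # instrument response

p0 = [signal.min(), signal.max(), np.median(conc), 1.0, 1.0]
params, _ = curve_fit(five_pl, conc, signal, p0=p0, maxfev=20_000)
print(params)
# Per the paper's recommendation, readings below ~1.3 pg/mL would instead be
# interpolated on a linear spline through the lowest calibrants.
```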
X-Band Acquisition Aid Software
NASA Technical Reports Server (NTRS)
Britcliffe, Michael J.; Strain, Martha M.; Wert, Michael
2011-01-01
The X-band Acquisition Aid (AAP) software is a low-cost acquisition aid for the Deep Space Network (DSN) antennas, and is used while acquiring a spacecraft shortly after it has launched. When enabled, the acquisition aid provides corrections to the antenna-predicted trajectory of the spacecraft to compensate for the variations that occur during the actual launch. The AAP software also provides the corrections to the antenna-predicted trajectory to the navigation team that uses the corrections to refine their model of the spacecraft in order to produce improved antenna-predicted trajectories for each spacecraft that passes over each complex. The software provides an automated Acquisition Aid receiver calibration, and provides graphical displays to the operator and remote viewers via an Ethernet connection. It has a Web server, and the remote workstations use the Firefox browser to view the displays. At any given time, only one operator can control any particular display in order to avoid conflicting commands from more than one control point. The configuration and control is accomplished solely via the graphical displays. The operator does not have to remember any commands. Only a few configuration parameters need to be changed, and can be saved to the appropriate spacecraft-dependent configuration file on the AAP's hard disk. AAP automates the calibration sequence by first commanding the antenna to the correct position, starting the receiver calibration sequence, and then providing the operator with the option of accepting or rejecting the new calibration parameters. If accepted, the new parameters are stored in the appropriate spacecraft-dependent configuration file. The calibration can be performed on the Sun, greatly expanding the window of opportunity for calibration. The spacecraft traditionally used for calibration is in view typically twice per day, and only for about ten minutes each pass.
Calibration Issues and Operating System Requirements for Electron-Probe Microanalysis
NASA Technical Reports Server (NTRS)
Carpenter, P.
2006-01-01
Instrument purchase requirements and dialogue with manufacturers have established hardware parameters for alignment, stability, and reproducibility, which have helped improve the precision and accuracy of electron microprobe analysis (EPMA). The development of correction algorithms and the accurate solution to quantitative analysis problems requires the minimization of systematic errors and relies on internally consistent data sets. Improved hardware and computer systems have resulted in better automation of vacuum systems, stage and wavelength-dispersive spectrometer (WDS) mechanisms, and x-ray detector systems which have improved instrument stability and precision. Improved software now allows extended automated runs involving diverse setups and better integrates digital imaging and quantitative analysis. However, instrumental performance is not regularly maintained, as WDS are aligned and calibrated during installation but few laboratories appear to check and maintain this calibration. In particular, detector deadtime (DT) data is typically assumed rather than measured, due primarily to the difficulty and inconvenience of the measurement process. This is a source of fundamental systematic error in many microprobe laboratories and is unknown to the analyst, as the magnitude of DT correction is not listed in output by microprobe operating systems. The analyst must remain vigilant to deviations in instrumental alignment and calibration, and microprobe system software must conveniently verify the necessary parameters. Microanalysis of mission critical materials requires an ongoing demonstration of instrumental calibration. Possible approaches to improvements in instrument calibration, quality control, and accuracy will be discussed. Development of a set of core requirements based on discussions with users, researchers, and manufacturers can yield documents that improve and unify the methods by which instruments can be calibrated. These results can be used to continue improvements of EPMA.
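As an example of the dead-time issue raised above, the standard non-paralyzable correction is a one-line formula, yet with τ of a few microseconds it silently shifts intensities by several percent at routine count rates (a generic formula, not tied to any particular microprobe operating system):

```python
# Standard non-paralyzable dead-time correction used in WDS pulse counting:
#   N_true = N_obs / (1 - N_obs * tau)
# With tau ~ 1-3 microseconds the correction is a few percent at 10^4-10^5 cps,
# which is exactly the kind of silent systematic error described above.

def deadtime_correct(observed_cps, tau_s):
    loss = observed_cps * tau_s
    if loss >= 1.0:
        raise ValueError("count rate too high for this dead-time model")
    return observed_cps / (1.0 - loss)

print(f"{deadtime_correct(2.0e4, 2.0e-6):.0f} cps")   # ~20833 cps, a 4.2% correction
```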
Automated calibration of multistatic arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderer, Bruce
A method is disclosed for calibrating a multistatic array having a plurality of transmitter and receiver pairs spaced from one another along a predetermined path and relative to a plurality of bin locations, and further being spaced at a fixed distance from a stationary calibration implement. A clock reference pulse may be generated, and each of the transmitters and receivers of each said transmitter/receiver pair turned on at a monotonically increasing time delay interval relative to the clock reference pulse. Ones of the transmitters and receivers may be used such that a previously calibrated transmitter or receiver of a given one of the transmitter/receiver pairs is paired with a subsequently un-calibrated one of the transmitters or receivers of an immediately subsequently positioned transmitter/receiver pair, to calibrate the transmitter or receiver of the immediately subsequent transmitter/receiver pair.
Automated Iodine Monitoring System Development (AIMS). [shuttle prototype
NASA Technical Reports Server (NTRS)
1975-01-01
The operating principle of the automated iodine monitoring/controller system (AIMS) is described along with several design modifications. The iodine addition system is also discussed along with test setups and calibration; a facsimile of the optical/mechanical portion of the iodine monitor was fabricated and tested. The appendices include information on shuttle prototype AIMS, preliminary prime item development specifications, preliminary failure modes and effects analysis, and preliminary operating and maintenance instructions.
Automated data processing architecture for the Gemini Planet Imager Exoplanet Survey
NASA Astrophysics Data System (ADS)
Wang, Jason J.; Perrin, Marshall D.; Savransky, Dmitry; Arriaga, Pauline; Chilcote, Jeffrey K.; De Rosa, Robert J.; Millar-Blanchaer, Maxwell A.; Marois, Christian; Rameau, Julien; Wolff, Schuyler G.; Shapiro, Jacob; Ruffio, Jean-Baptiste; Maire, Jérôme; Marchis, Franck; Graham, James R.; Macintosh, Bruce; Ammons, S. Mark; Bailey, Vanessa P.; Barman, Travis S.; Bruzzone, Sebastian; Bulger, Joanna; Cotten, Tara; Doyon, René; Duchêne, Gaspard; Fitzgerald, Michael P.; Follette, Katherine B.; Goodsell, Stephen; Greenbaum, Alexandra Z.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Konopacky, Quinn M.; Larkin, James E.; Marley, Mark S.; Metchev, Stanimir; Nielsen, Eric L.; Oppenheimer, Rebecca; Palmer, David W.; Patience, Jennifer; Poyneer, Lisa A.; Pueyo, Laurent; Rajan, Abhijith; Rantakyrö, Fredrik T.; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane J.
2018-01-01
The Gemini Planet Imager Exoplanet Survey (GPIES) is a multiyear direct imaging survey of 600 stars to discover and characterize young Jovian exoplanets and their environments. We have developed an automated data architecture to process and index all data related to the survey uniformly. An automated and flexible data processing framework, which we term the Data Cruncher, combines multiple data reduction pipelines (DRPs) together to process all spectroscopic, polarimetric, and calibration data taken with GPIES. With no human intervention, fully reduced and calibrated data products are available less than an hour after the data are taken to expedite follow up on potential objects of interest. The Data Cruncher can run on a supercomputer to reprocess all GPIES data in a single day as improvements are made to our DRPs. A backend MySQL database indexes all files, which are synced to the cloud, and a front-end web server allows for easy browsing of all files associated with GPIES. To help observers, quicklook displays show reduced data as they are processed in real time, and chatbots on Slack post observing information as well as reduced data products. Together, the GPIES automated data processing architecture reduces our workload, provides real-time data reduction, optimizes our observing strategy, and maintains a homogeneously reduced dataset to study planet occurrence and instrument performance.
Workcell calibration for effective offline programming
NASA Technical Reports Server (NTRS)
Stiles, Roger D.; Jones, Clyde S.
1989-01-01
In the application of graphics systems for off-line programming (OLP) of robotic systems, the inevitability of errors in the model representation of real-world situations requires that a method to map these differences be incorporated as an integral part of the overall system programming procedures. This paper discusses several proven robot-to-positioner calibration techniques necessary to reflect real-world parameters in a work-cell model. Particular attention is given to the procedures used to adjust a graphics model to an acceptable degree of accuracy for integration of OLP for the Space Shuttle Main Engine welding automation. Consideration is given to the levels of calibration, requirements, special considerations for coordinated motion, and calibration procedures.
A 10 cm Dual Frequency Doppler Weather Radar. Part I. The Radar System.
1982-10-25
...Evaluation System (RAMCES)". The step attenuator required for this calibration can be programmed remotely, has low power and temperature coefficients, and ... Control and Evaluation System". The Quality Assurance/Fault Location Network makes use of fault location techniques at critical locations in the radar and ... quasi-continuous monitoring of radar performance. The Radar Monitor, Control and Evaluation System provides for automated system calibration and ...
Improvement of an automated protein crystal exchange system PAM for high-throughput data collection
Hiraki, Masahiko; Yamada, Yusuke; Chavas, Leonard M. G.; Wakatsuki, Soichi; Matsugaki, Naohiro
2013-01-01
Photon Factory Automated Mounting system (PAM) protein crystal exchange systems are available at the following Photon Factory macromolecular beamlines: BL-1A, BL-5A, BL-17A, AR-NW12A and AR-NE3A. The beamline AR-NE3A has been constructed for high-throughput macromolecular crystallography and is dedicated to structure-based drug design. The PAM liquid-nitrogen Dewar can store a maximum of three SSRL cassettes. Therefore, users have to interrupt their experiments and replace the cassettes when using four or more of them during their beam time. An investigation of usage showed that four or more cassettes were used at AR-NE3A alone. For continuous automated data collection, the size of the liquid-nitrogen Dewar for the AR-NE3A PAM was increased, doubling the capacity. To check the calibration with the new Dewar and the cassette stand, calibration experiments were repeatedly performed. Compared with the current system, the parameters of the new system are shown to be stable. PMID:24121334
Xie, Wei-Qi; Gong, Yi-Xian; Yu, Kong-Xian
2018-06-01
An automated and accurate headspace gas chromatographic (HS-GC) technique was investigated for rapidly quantifying the water content in edible oils. In this method, multiple headspace extraction (MHE) procedures were used to analyse the integrated water content from the edible oil sample. A simple vapour-phase calibration technique with an external vapour standard was used to calibrate both the water content in the gas phase and the total weight of water in the edible oil sample, after which the water in edible oils can be quantified. The data showed that the relative standard deviation of the present HS-GC method in the precision test was less than 1.13%, and the relative differences between the new method and a reference method (i.e. the oven-drying method) were no more than 1.62%. The present HS-GC method is automated, accurate and efficient, and can be a reliable tool for quantifying water content in edible-oil-related products and research. © 2017 Society of Chemical Industry.
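MHE quantification rests on the fact that peak areas from successive extractions decay geometrically, so the integrated response can be summed in closed form. A minimal sketch with illustrative numbers, not the authors' data:

```python
import numpy as np

# Sketch of multiple headspace extraction (MHE) quantification: peak areas
# from successive extractions decay geometrically, so the total analyte
# response is A_total = A1 / (1 - q), where q is the fitted decay ratio.
# An external vapour standard then converts total area to mass of water.

areas = np.array([1520.0, 1080.0, 770.0, 548.0])   # successive extraction peak areas
k, _ = np.polyfit(np.arange(len(areas)), np.log(areas), 1)
q = np.exp(k)                                      # decay ratio per extraction step
total_area = areas[0] / (1.0 - q)

rf = 0.85e-3   # mg water per area unit, from the external vapour standard (assumed)
print(f"total water: {total_area * rf:.2f} mg")
```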
Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity
Louis, S.J.; Raines, G.L.
2003-01-01
We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition-rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors on calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinement of how the mineral-resource information is provided to the cellular automaton will probably improve our model.
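A minimal version of such a calibration loop is easy to state: evolve a population of transition-rule parameter vectors, scoring each by how well the resulting automaton reproduces the observed activity map. In the sketch below, run_ca() is a hypothetical stand-in for the cellular automaton and the fitness definition is illustrative:

```python
import numpy as np

# Minimal genetic-algorithm sketch of the calibration loop described above.
# run_ca(params) is a hypothetical stand-in that runs the cellular automaton
# and returns a simulated activity map; 'observed' is the recorded mining
# activity. Fitness here is the fraction of matching cells (illustrative).

rng = np.random.default_rng(0)

def fitness(params, observed):
    simulated = run_ca(params)                 # hypothetical CA run
    return (simulated == observed).mean()

def calibrate(observed, n_params=8, pop=50, gens=100, sigma=0.1):
    population = rng.random((pop, n_params))   # rule parameters scaled to [0, 1)
    for _ in range(gens):
        scores = np.array([fitness(ind, observed) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]   # truncation selection
        # uniform crossover between randomly chosen parent pairs
        a, b = parents[rng.integers(len(parents), size=(2, pop))]
        mask = rng.random((pop, n_params)) < 0.5
        children = np.where(mask, a, b)
        children += rng.normal(0.0, sigma, children.shape)     # Gaussian mutation
        population = np.clip(children, 0.0, 1.0)
    return max(population, key=lambda ind: fitness(ind, observed))
```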
Orbital Express AVGS Validation and Calibration for Automated Rendezvous
NASA Technical Reports Server (NTRS)
Heaton, Andrew F.; Howard, Richard T.; Pinson, Robin M.
2008-01-01
From March to July of 2007, the DARPA Orbital Express mission achieved a number of firsts in autonomous spacecraft operations. The NASA Advanced Video Guidance Sensor (AVGS) was the primary docking sensor during the first two dockings and was used in a blended mode during three other automated captures. The AVGS performance exceeded its specification by approximately an order of magnitude. One reason that the AVGS functioned so well during the mission was that the validation and calibration of the sensor prior to the mission advanced the state of the art for proximity sensors. Factors in this success were improvements in ground test equipment and truth data, the capability for ILOAD corrections of optical and other effects, and the development of a bias correction procedure. Several valuable lessons learned have applications to future proximity sensors.
Automated Camera Array Fine Calibration
NASA Technical Reports Server (NTRS)
Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang
2008-01-01
Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.
Hand-Eye Calibration in Visually-Guided Robot Grinding.
Li, Wen-Long; Xie, He; Zhang, Gang; Yan, Si-Jie; Yin, Zhou-Ping
2016-11-01
Visually guided robot grinding is a novel and promising automation technique for blade manufacturing. One common problem encountered in robot grinding is hand-eye calibration, which establishes the pose relationship between the end effector (hand) and the scanning sensor (eye). This paper proposes a new calibration approach for robot belt grinding. The main contribution of this paper is its consideration of both joint parameter errors and pose parameter errors in a hand-eye calibration equation. The objective function of the hand-eye calibration is built and solved, from which 30 compensated values (corresponding to 24 joint parameters and six pose parameters) are easily calculated in closed form. The proposed approach is economical and simple because only a criterion sphere is used to calculate the calibration parameters, avoiding the need for an expensive and complicated tracking process using a laser tracker. The effectiveness of this method is verified in a calibration experiment and a blade grinding experiment. The code used in this approach is provided in the Appendix.
An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model
NASA Astrophysics Data System (ADS)
Tiernan, E. D.; Hodges, B. R.
2017-12-01
The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine makes use of a preliminary sensitivity analysis to reduce the dimensions of the parameter space, at which point a multi-objective function, genetic algorithm (modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent the optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
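The routine's core idea can be sketched with an off-the-shelf NSGA-II implementation such as pymoo's (the paper uses a modified NSGA-II, so this is an illustration, not its actual code); run_swmm() is a hypothetical wrapper that writes a candidate parameter vector into the SWMM input, runs the model, and returns the simulated hydrograph:

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

# Sketch of multi-objective SWMM calibration. run_swmm(x) is a hypothetical
# wrapper around the model; x might hold subcatchment width, imperviousness,
# and roughness after the sensitivity analysis has pruned the parameter space.

class SwmmCalibration(ElementwiseProblem):
    def __init__(self, observed, bounds):
        xl, xu = np.array(bounds).T
        super().__init__(n_var=len(bounds), n_obj=2, xl=xl, xu=xu)
        self.observed = observed

    def _evaluate(self, x, out, *args, **kwargs):
        sim = run_swmm(x)                              # hypothetical model run
        vol_err = abs(sim.sum() - self.observed.sum()) / self.observed.sum()
        peak_err = abs(sim.max() - self.observed.max()) / self.observed.max()
        out["F"] = [vol_err, peak_err]                 # two objectives to minimize

# res = minimize(SwmmCalibration(obs_q, bounds), NSGA2(pop_size=40),
#                ("n_gen", 60), seed=1)
# res.X holds the Pareto-optimal parameter sets, res.F their objective values.
```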
Cryogenic radiometers and intensity-stabilized lasers for Eos radiometric calibrations
NASA Technical Reports Server (NTRS)
Foukal, P.; Hoyt, C.; Jauniskis, L.
1991-01-01
Liquid helium-cooled electrical substitution radiometers (ESRs) provide irradiance standards with demonstrated absolute accuracy at the 0.01 percent level, spectrally flat response between the UV and IR, and sensitivity down to 0.1 nW/sq cm. We describe an automated system developed for NASA Goddard Space Flight Center, consisting of a cryogenic ESR illuminated by servo-controlled laser beams. This system is designed to provide calibration of single-element and array detectors over the spectral range from 257 nm in the UV to 10.6 microns in the IR. We also describe a cryogenic ESR optimized for blackbody calibrations that has been installed at NIST, and another that is under construction for calibrations of the CERES scanners planned for Eos.
NASA Technical Reports Server (NTRS)
Haney, Conor; Doeling, David; Minnis, Patrick; Bhatt, Rajendra; Scarino, Benjamin; Gopalan, Arun
2016-01-01
The Deep Space Climate Observatory (DSCOVR), launched on 11 February 2015, is a satellite positioned near the Lagrange-1 (L1) point, carrying several instruments that monitor space weather, and Earth-view sensors designed for climate studies. The Earth Polychromatic Imaging Camera (EPIC) onboard DSCOVR continuously views the sun-illuminated portion of the Earth with spectral coverage in the UV, VIS, and NIR bands. Although the EPIC instrument does not have any onboard calibration abilities, its constant view of the sunlit Earth disk provides a unique opportunity for simultaneous viewing with several other satellite instruments. This arrangement allows the EPIC sensor to be inter-calibrated using other well-characterized satellite instruments as reference standards. Two such instruments with onboard calibration are MODIS, flown on Aqua and Terra, and VIIRS, onboard Suomi-NPP. The MODIS and VIIRS reference calibrations will be transferred to the EPIC instrument using both all-sky ocean and deep convective cloud (DCC) ray-matched EPIC and MODIS/VIIRS radiance pairs. An automated navigation correction routine was developed to more accurately align the EPIC and MODIS/VIIRS granules; it dramatically reduced the uncertainty of the resulting calibration gain based on the EPIC and MODIS/VIIRS radiance pairs. The SCIAMACHY-based spectral band adjustment factors (SBAF) applied to the MODIS/VIIRS radiances were found to successfully adjust the reference radiances to the spectral response of the specific EPIC channel for overlapping spectral channels. The SBAF was also found to be effective for the non-overlapping EPIC channel 10. Lastly, both ray-matching techniques found no discernible trends for EPIC channel 7 over the year of publicly released EPIC data.
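A ray-matched inter-calibration ultimately reduces to a regression of SBAF-adjusted reference radiances against coincident target observations, commonly forced through the origin. A minimal sketch with illustrative numbers (the paper's actual regression and screening details are more involved):

```python
import numpy as np

# Sketch of a ray-matched inter-calibration gain: SBAF-adjusted reference
# radiances (MODIS/VIIRS) are regressed against coincident EPIC counts, with
# the fit forced through the origin as is common for radiance-pair calibration.

def calibration_gain(epic_counts, ref_radiance, sbaf):
    """Least-squares gain through the origin: radiance = gain * counts."""
    adjusted = np.asarray(ref_radiance) * sbaf      # move reference to EPIC's band
    counts = np.asarray(epic_counts)
    return float(np.sum(adjusted * counts) / np.sum(counts * counts))

counts = np.array([5200.0, 8100.0, 11050.0, 14300.0])   # illustrative EPIC counts
ref = np.array([104.0, 163.5, 220.0, 287.5])            # W m-2 sr-1 um-1, say
print(f"gain = {calibration_gain(counts, ref, sbaf=0.98):.5f}")
```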
Antonelli, Giorgia; Padoan, Andrea; Artusi, Carlo; Marinova, Mariela; Zaninotto, Martina; Plebani, Mario
2016-04-01
The aim of this study was to implement in our routine practice an automated saliva preparation protocol for quantification of cortisol (F) and cortisone (E) by LC-MS/MS using a liquid handling platform, maintaining the reference intervals previously defined with the manual preparation. Addition of internal standard solution to saliva samples and calibrators, and SPE on a μ-elution 96-well plate, were performed by the liquid handling platform. After extraction, the eluates were submitted to LC-MS/MS analysis. The manual steps within the entire process were to transfer saliva samples into suitable tubes, to put on the cap mat and to transfer the collection plate to the LC autosampler. Transference of the reference intervals from the manual to the automated procedure was established by Passing-Bablok regression on 120 saliva samples analyzed simultaneously with the two procedures. Calibration curves were linear throughout the selected ranges. The imprecision ranged from 2 to 10%, with recoveries from 95 to 116%. Passing-Bablok regression demonstrated no significant bias. The liquid handling platform translates the manual steps into automated operations, saving hands-on time while maintaining assay reproducibility and ensuring reliability of results, making it implementable in our routine with the previously established reference intervals. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
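Passing-Bablok regression, used above to establish transference, is a robust fit whose slope is the shifted median of all pairwise slopes, making it tolerant of outliers and of error in both methods. A simplified sketch (slope and intercept only, no confidence intervals, and not the authors' statistical software):

```python
import numpy as np

# Simplified Passing-Bablok estimator: the slope is the shifted median of all
# pairwise slopes; the intercept is the median residual offset.

def passing_bablok(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = []
    n = len(x)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if x[i] != x[j]:
                s = (y[j] - y[i]) / (x[j] - x[i])
                if s != -1.0:                 # convention: discard slopes of exactly -1
                    slopes.append(s)
    slopes = np.sort(np.array(slopes))
    k = int(np.sum(slopes < -1.0))            # offset correcting for negative slopes
    m = len(slopes)
    if m % 2:
        b = slopes[(m - 1) // 2 + k]
    else:
        b = 0.5 * (slopes[m // 2 - 1 + k] + slopes[m // 2 + k])
    a = float(np.median(y - b * x))
    return b, a

manual = [8.2, 12.5, 15.1, 20.3, 25.8, 31.0]   # nmol/L, illustrative values
auto   = [8.0, 12.9, 14.8, 20.9, 25.1, 31.6]
print(passing_bablok(manual, auto))            # slope ~1, intercept ~0 => no bias
```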
The automated data processing architecture for the GPI Exoplanet Survey
NASA Astrophysics Data System (ADS)
Wang, Jason J.; Perrin, Marshall D.; Savransky, Dmitry; Arriaga, Pauline; Chilcote, Jeffrey K.; De Rosa, Robert J.; Millar-Blanchaer, Maxwell A.; Marois, Christian; Rameau, Julien; Wolff, Schuyler G.; Shapiro, Jacob; Ruffio, Jean-Baptiste; Graham, James R.; Macintosh, Bruce
2017-09-01
The Gemini Planet Imager Exoplanet Survey (GPIES) is a multi-year direct imaging survey of 600 stars to discover and characterize young Jovian exoplanets and their environments. We have developed an automated data architecture to process and index all data related to the survey uniformly. An automated and flexible data processing framework, which we term the GPIES Data Cruncher, combines multiple data reduction pipelines together to intelligently process all spectroscopic, polarimetric, and calibration data taken with GPIES. With no human intervention, fully reduced and calibrated data products are available less than an hour after the data are taken to expedite follow-up on potential objects of interest. The Data Cruncher can run on a supercomputer to reprocess all GPIES data in a single day as improvements are made to our data reduction pipelines. A backend MySQL database indexes all files, which are synced to the cloud, and a front-end web server allows for easy browsing of all files associated with GPIES. To help observers, quicklook displays show reduced data as they are processed in real-time, and chatbots on Slack post observing information as well as reduced data products. Together, the GPIES automated data processing architecture reduces our workload, provides real-time data reduction, optimizes our observing strategy, and maintains a homogeneously reduced dataset to study planet occurrence and instrument performance.
Automation of a high-speed imaging setup for differential viscosity measurements
NASA Astrophysics Data System (ADS)
Hurth, C.; Duane, B.; Whitfield, D.; Smith, S.; Nordquist, A.; Zenhausern, F.
2013-12-01
We present the automation of a setup previously used to assess the viscosity of pleural effusion samples and discriminate between transudates and exudates, an important first step in clinical diagnostics. The presented automation includes the design, testing, and characterization of a vacuum-actuated loading station that handles the 2 mm glass spheres used as sensors, as well as the engineering of an electronic printed circuit board (PCB) incorporating a microcontroller and its synchronization with a commercial high-speed camera operating at 10 000 fps. The present work therefore focuses on the instrumentation-related automation efforts, as the general method and clinical application have been reported earlier [Hurth et al., J. Appl. Phys. 110, 034701 (2011)]. In addition, we validate the performance of the automated setup with a calibration for viscosity measurements using water/glycerol standard solutions and the determination of the viscosity of an "unknown" solution of hydroxyethyl cellulose.
Recent Research on the Automated Mass Measuring System
NASA Astrophysics Data System (ADS)
Yao, Hong; Ren, Xiao-Ping; Wang, Jian; Zhong, Rui-Lin; Ding, Jing-An
This paper reviews the development of robotic mass measurement systems, introduces representative automatic systems, and discusses a sub-multiple calibration scheme adopted on a fully automatic CCR10 system. An automatic robot system can perform the dissemination of the mass scale without any manual intervention, as well as fast calibration of weight samples against a reference weight. Finally, an evaluation of the expanded uncertainty is given.
Winterfield, Craig; van de Voort, F R
2014-12-01
The Fluid Life Corporation assessed and implemented Fourier transform infrared spectroscopy (FTIR)-based methods using American Society for Testing and Materials (ASTM)-like stoichiometric reactions for the determination of acid and base number in in-service mineral-based oils. The basic protocols, quality control procedures, calibration, validation, and performance of these new quantitative methods are assessed. ASTM correspondence is attained using a mixed-mode calibration, with primary reference standards anchoring the calibration, supplemented by representative sample lubricants analyzed by ASTM procedures. A partial least squares calibration is devised by combining primary acid/base reference standards and representative samples, focusing on the main spectral stoichiometric response, with chemometrics assisting in accounting for matrix variability. FTIR(AN/BN) methodology is precise, accurate, and free of most interferences that affect ASTM D664 and D4739 results. Extensive side-by-side operational runs produced normally distributed differences with mean differences close to zero and standard deviations of 0.18 and 0.26 mg KOH/g, respectively. Statistically, the FTIR methods are a direct match to the ASTM methods, with superior performance in terms of analytical throughput, preparation time, and solvent use. FTIR(AN/BN) analysis is a viable, significant advance for in-service lubricant analysis, providing an economical means of trending samples instead of tedious and expensive conventional ASTM(AN/BN) procedures. © 2014 Society for Laboratory Automation and Screening.
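A PLS calibration of this kind maps full spectra onto AN/BN values learned from the mixed standard/sample training set. The sketch below shows the pattern with scikit-learn's PLSRegression on synthetic placeholder arrays rather than real spectra:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Sketch of a mixed-mode PLS calibration in the spirit described above:
# spectra of primary acid/base standards are pooled with ASTM-assayed
# in-service oils, and a PLS model maps spectra to AN values.
# All arrays are synthetic placeholders, not the authors' data.

rng = np.random.default_rng(1)
n_standards, n_oils, n_wavenumbers = 20, 60, 400

X = rng.normal(size=(n_standards + n_oils, n_wavenumbers))   # absorbance spectra
y = rng.uniform(0.1, 4.0, size=n_standards + n_oils)         # AN, mg KOH/g

pls = PLSRegression(n_components=6)
pls.fit(X, y)
predicted = pls.predict(X).ravel()
print(f"calibration RMSE: {np.sqrt(np.mean((predicted - y) ** 2)):.3f} mg KOH/g")
```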
NASA Astrophysics Data System (ADS)
Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Brink, Henrik; Crellin-Quick, Arien
2012-12-01
With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.
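In this context, "well-calibrated" means that among sources assigned probability p of belonging to a class, a fraction close to p truly does. A generic reliability check, using scikit-learn on toy data rather than MACC itself:

```python
import numpy as np
from sklearn.calibration import calibration_curve

# Reliability check of posterior probabilities: bin the predictions and
# compare the mean predicted probability in each bin with the observed
# fraction of positives. Toy data below are perfectly calibrated by design.

rng = np.random.default_rng(42)
prob = rng.uniform(0, 1, 5000)              # classifier's posterior for one class
truth = rng.uniform(0, 1, 5000) < prob      # labels drawn to match the posteriors

frac_positive, mean_predicted = calibration_curve(truth, prob, n_bins=10)
for p_hat, p_true in zip(mean_predicted, frac_positive):
    print(f"predicted {p_hat:.2f}  observed {p_true:.2f}")
# Systematic departures from the diagonal would be corrected with, e.g.,
# isotonic regression before releasing catalog probabilities.
```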
Gulf of Mexico Climate-History Calibration Study
Spear, Jessica W.; Poore, Richard Z.
2010-01-01
Reliable instrumental records of past climate are available for about the last 150 years only. To supplement the instrumental record, reconstructions of past climate are made from natural recorders such as trees, ice, corals, and microfossils preserved in sediments. These proxy records provide information on the rate and magnitude of past climate variability, factors that are critical to distinguishing between natural and human-induced climate change in the present. However, the value of proxy records is heavily dependent on calibration between the chemistry of the natural recorder and the modern environmental conditions. The Gulf of Mexico Climate and Environmental History Project is currently undertaking a climate-history calibration study with material collected from an automated sediment trap. The primary focus of the calibration study is to provide a better calibration of low-latitude environmental conditions and shell chemistry of calcareous microfossils, such as planktic Foraminifera.
NASA Technical Reports Server (NTRS)
Cibula, W. G.
1976-01-01
The techniques used for the automated classification of marshland vegetation and for the color-coded display of remotely acquired data to facilitate the control of mosquito breeding are presented. A multispectral scanner system and its mode of operation are described, and the computer processing techniques are discussed. The procedures for the selection of calibration sites are explained. Three methods for displaying color-coded classification data are presented.
NASA Astrophysics Data System (ADS)
Kasai, R.; Abe, T.; Sano, T.
Automated electromagnetic compatibility (EMC) tests for spacecraft hardware are described. EMC tests are divided into three categories: compensating measurement and calibration errors, comparison of test results with specification, and fine-frequency searching using predictive interference analysis. The automated system features an RF receiver and transmitter, a control system, and antennas. Trials are run with conducted and radiated emissions and conducted and radiated susceptibility over a frequency range of 0.1-40 GHz with narrow, broad and random broad band noise. The system meets military specifications 1541, 461, and 462.
Wu, Y.; Liu, S.
2012-01-01
Parameter optimization and uncertainty issues are a great challenge for the application of large environmental models like the Soil and Water Assessment Tool (SWAT), which is a physically-based hydrological model for simulating water and nutrient cycles at the watershed scale. In this study, we present a comprehensive modeling environment for SWAT, including automated calibration, and sensitivity and uncertainty analysis capabilities through integration with the R package Flexible Modeling Environment (FME). To address challenges (e.g., calling the model in R and transferring variables between Fortran and R) in developing such a two-language coupling framework, 1) we converted the Fortran-based SWAT model to an R function (R-SWAT) using the RFortran platform, and alternatively 2) we compiled SWAT as a Dynamic Link Library (DLL). We then wrapped SWAT (via R-SWAT) with FME to perform complex applications including parameter identifiability, inverse modeling, and sensitivity and uncertainty analysis in the R environment. The final R-SWAT-FME framework has the following key functionalities: automatic initialization of R, running Fortran-based SWAT and R commands in parallel, transferring parameters and model output between SWAT and R, and inverse modeling with visualization. To examine this framework and demonstrate how it works, a case study simulating streamflow in the Cedar River Basin in Iowa in the United States was used, and we compared it with the built-in auto-calibration tool of SWAT in parameter optimization. Results indicate that both methods performed well and similarly in searching a set of optimal parameters. Nonetheless, the R-SWAT-FME is more attractive due to its instant visualization, and potential to take advantage of other R packages (e.g., inverse modeling and statistical graphics). The methods presented in the paper are readily adaptable to other model applications that require capability for automated calibration, and sensitivity and uncertainty analysis.
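The coupling pattern the abstract describes, wrapping the external model as a function and handing it to a generic optimiser, can be sketched as below. This is Python/SciPy rather than the authors' R-FME framework, and toy_model is a stand-in for a run that would actually write SWAT input files, invoke the executable, and parse the outputs.

```python
# Inverse-modeling sketch: minimise an RMSE cost around a wrapped model run.
import numpy as np
from scipy.optimize import minimize

t = np.arange(100)
true_params = np.array([0.8, 0.05])

def toy_model(params):
    """Stand-in for an external SWAT run returning simulated streamflow."""
    return params[0] * np.exp(-params[1] * t) * 100.0

observed = toy_model(true_params) + np.random.default_rng(0).normal(0, 1, t.size)

def cost(params):
    return np.sqrt(np.mean((toy_model(params) - observed) ** 2))   # RMSE misfit

result = minimize(cost, x0=np.array([0.5, 0.1]), method="Nelder-Mead")
print(result.x)   # recovers values close to true_params
```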
ROx3: Retinal oximetry utilizing the blue-green oximetry method
NASA Astrophysics Data System (ADS)
Parsons, Jennifer Kathleen Hendryx
The ROx is a retinal oximeter under development with the purpose of non-invasively and accurately measuring oxygen saturation (SO2) in vivo. It is novel in that it utilizes the blue-green oximetry technique with on-axis illumination. ROx calibration tests were performed by inducing hypoxia in live anesthetized swine and comparing ROx measurements to SO2 values measured by a CO-Oximeter. Calibration was not achieved to the precision required for clinical use, but limiting factors were identified and improved. The ROx was used in a set of sepsis experiments on live pigs with the intention of tracking retinal SO2 during the development of sepsis. Though conclusions are qualitative due to insufficient calibration of the device, retinal venous SO2 is shown to trend generally with central venous SO2 as sepsis develops. The novel sepsis model developed in these experiments is also described. The method of cecal ligation and perforation with additional soiling of the abdomen consistently produced controllable severe sepsis/septic shock in a matter of hours. In addition, the ROx was used to collect retinal images from a healthy human volunteer. These experiments served as a bench test for several of the additions/modifications made to the ROx. This set of experiments specifically served to illuminate problems with various light paths and image acquisition. The analysis procedure for the ROx is under development, particularly automating the process for consistency, accuracy, and time efficiency. The current stage of automation is explained, including data acquisition processes and the automated vessel fit routine. Suggestions for the next generation of device minimization are also described.
Automatically calibrating admittances in KATE's autonomous launch operations model
NASA Technical Reports Server (NTRS)
Morgan, Steve
1992-01-01
This report documents a 1000-line Symbolics LISP program that automatically calibrates all 15 fluid admittances in KATE's Autonomous Launch Operations (ALO) model. (KATE is Kennedy Space Center's Knowledge-based Autonomous Test Engineer, a diagnosis and repair expert system created for use on the Space Shuttle's various fluid flow systems.) As a new KATE application, the calibrator described here breaks new ground for KSC's Artificial Intelligence Lab by allowing KATE to both control and measure the hardware she supervises. By automating a formerly manual process, the calibrator: (1) saves the ALO model builder untold amounts of labor; (2) enables quick repairs after workmen accidentally adjust ALO's hand valves; and (3) frees the modeler to pursue new KATE applications that previously were too complicated. Also reported are suggestions for enhancing the program: (1) to calibrate ALO's TV cameras, pumps, and sensor tolerances; and (2) to calibrate devices in other KATE models, such as the shuttle's LOX and Environment Control System (ECS).
Horrey, William J; Lesch, Mary F; Mitsopoulos-Rubens, Eve; Lee, John D
2015-03-01
Humans often make inflated or erroneous estimates of their own ability or performance. Such errors in calibration can be due to incomplete processing, neglect of available information, or improper weighting or integration of the information, and can impact our decision-making, risk tolerance, and behaviors. In the driving context, these outcomes can have important implications for safety. The current paper discusses the notion of calibration in the context of self-appraisals and self-competence as well as in models of self-regulation in driving. We further develop a conceptual framework for calibration in the driving context, borrowing from earlier models of momentary demand regulation, information processing, and lens models for information selection and utilization. Finally, using the model we describe the implications of calibration (or, more specifically, errors in calibration) for our understanding of driver distraction, in-vehicle automation and autonomous vehicles, and the training of novice and inexperienced drivers. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
2012-02-01
parameter estimation method, but rather to carefully describe how to use the ERDC software implementation of MLSL that accommodates the PEST model...model independent LM method based parameter estimation software PEST (Doherty, 2004, 2007a, 2007b), which quantifies model to measurement misfit...et al. (2011) focused on one drawback associated with LM-based model independent parameter estimation as implemented in PEST; viz., that it requires
Microwave Interferometry (90 GHz) for Hall Thruster Plume Density Characterization
2005-06-01
Hall thruster. The interferometer has been modified to overcome initial difficulties encountered during the preliminary testing. The modifications include the ability to perform remote and automated calibrations as well as an aluminum enclosure to shield the interferometer from the Hall thruster plume. With these modifications, it will be possible to make unambiguous electron density measurements of the thruster plume as well as to rapidly and automatically calibrate the interferometer to eliminate the effects of signal drift. Due to the versatility
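For a probe frequency well above the plasma frequency, the interferometer phase shift relates to the line-integrated electron density by the standard plasma-interferometry result (not quoted in the fragment above; symbols are generic):

$$\Delta\phi = \frac{e^2}{2\,\varepsilon_0 m_e c\,\omega}\int n_e\,\mathrm{d}l$$

with $\omega = 2\pi \times 90\ \mathrm{GHz}$ here, so once signal drift is calibrated out, the measured phase yields the line-integrated density directly.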
Elixir - how to handle 2 trillion pixels
NASA Astrophysics Data System (ADS)
Magnier, Eugene A.; Cuillandre, Jean-Charles
2002-12-01
The Elixir system at CFHT provides automatic data quality assurance and calibration for the wide-field mosaic imager camera CFH12K. Elixir consists of a variety of tools, including: a real-time analysis suite which runs at the telescope to provide quick feedback to the observers; a detailed analysis of the calibration data; and an automated pipeline for processing data to be distributed to observers. To date, 2.4 × 10^12 night-time sky pixels from CFH12K have been processed by the Elixir system.
Delahanty, Ryan J; Kaufman, David; Jones, Spencer S
2018-06-01
Risk adjustment algorithms for ICU mortality are necessary for measuring and improving ICU performance. Existing risk adjustment algorithms are not widely adopted. Key barriers to adoption include licensing and implementation costs as well as labor costs associated with human-intensive data collection. Widespread adoption of electronic health records makes automated risk adjustment feasible. Using modern machine learning methods and open source tools, we developed and evaluated a retrospective risk adjustment algorithm for in-hospital mortality among ICU patients. The Risk of Inpatient Death score can be fully automated and is reliant upon data elements that are generated in the course of usual hospital processes. One hundred thirty-one ICUs in 53 hospitals operated by Tenet Healthcare. A cohort of 237,173 ICU patients discharged between January 2014 and December 2016. The data were randomly split into training (36 hospitals) and validation (17 hospitals) data sets. Feature selection and model training were carried out using the training set while the discrimination, calibration, and accuracy of the model were assessed in the validation data set. Model discrimination was evaluated based on the area under receiver operating characteristic curve; accuracy and calibration were assessed via adjusted Brier scores and visual analysis of calibration curves. Seventeen features, including a mix of clinical and administrative data elements, were retained in the final model. The Risk of Inpatient Death score demonstrated excellent discrimination (area under receiver operating characteristic curve = 0.94) and calibration (adjusted Brier score = 52.8%) in the validation data set; these results compare favorably to the published performance statistics for the most commonly used mortality risk adjustment algorithms. Low adoption of ICU mortality risk adjustment algorithms impedes progress toward increasing the value of the healthcare delivered in ICUs. The Risk of Inpatient Death score has many attractive attributes that address the key barriers to adoption of ICU risk adjustment algorithms and performs comparably to existing human-intensive algorithms. Automated risk adjustment algorithms have the potential to obviate known barriers to adoption such as cost-prohibitive licensing fees and significant direct labor costs. Further evaluation is needed to ensure that the level of performance observed in this study could be achieved at independent sites.
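The validation metrics named above (area under the ROC curve, Brier score, calibration curves) can be computed as in the sketch below; the predictions and outcomes are synthetic stand-ins, not the study's data.

```python
# Discrimination and calibration checks for a probabilistic mortality model.
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(1)
p_pred = rng.uniform(0, 1, 5000)           # predicted mortality risks
y_true = rng.binomial(1, p_pred)           # outcomes consistent with the risks

auroc = roc_auc_score(y_true, p_pred)      # discrimination
brier = brier_score_loss(y_true, p_pred)   # overall probabilistic accuracy
frac_pos, mean_pred = calibration_curve(y_true, p_pred, n_bins=10)
# A well-calibrated model has frac_pos close to mean_pred in every bin.
```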
Single-Vector Calibration of Wind-Tunnel Force Balances
NASA Technical Reports Server (NTRS)
Parker, P. A.; DeLoach, R.
2003-01-01
An improved method of calibrating a wind-tunnel force balance involves the use of a unique load application system integrated with formal experimental design methodology. The Single-Vector Force Balance Calibration System (SVS) overcomes the productivity and accuracy limitations of prior calibration methods. A force balance is a complex structural spring element instrumented with strain gauges for measuring three orthogonal components of aerodynamic force (normal, axial, and side force) and three orthogonal components of aerodynamic torque (rolling, pitching, and yawing moments). Force balances remain the state-of-the-art instruments that provide these measurements on a scale model of an aircraft during wind tunnel testing. Ideally, each electrical channel of the balance would respond only to its respective component of load, and it would have no response to other components of load. This is not entirely possible even though balance designs are optimized to minimize these undesirable interaction effects. Ultimately, a calibration experiment is performed to obtain the necessary data to generate a mathematical model and determine the force measurement accuracy. In order to set the independent variables of applied load for the calibration experiment, a high-precision mechanical system is required. Manual deadweight systems have been in use at Langley Research Center (LaRC) since the 1940s. These simple methodologies produce high confidence results, but the process is mechanically complex and labor-intensive, requiring three to four weeks to complete. Over the past decade, automated balance calibration systems have been developed. In general, these systems were designed to automate the tedious manual calibration process, resulting in an even more complex system which deteriorates load application quality. The current calibration approach relies on a one-factor-at-a-time (OFAT) methodology, where each independent variable is incremented individually throughout its full-scale range, while all other variables are held at a constant magnitude. This OFAT approach has been widely accepted because of its inherent simplicity and intuitive appeal to the balance engineer. LaRC has been conducting research in a "modern design of experiments" (MDOE) approach to force balance calibration. Formal experimental design techniques provide an integrated view of the entire calibration process covering all three major aspects of an experiment: the design of the experiment, the execution of the experiment, and the statistical analyses of the data. In order to overcome the weaknesses in the available mechanical systems and to apply formal experimental techniques, a new mechanical system was required. The SVS enables the complete calibration of a six-component force balance with a series of single force vectors.
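The data from such a calibration experiment ultimately feed a regression like the toy version below: each bridge output is fitted by least squares on the six load components plus their pairwise products (the interaction terms). Loads and readings here are synthetic.

```python
# Toy balance calibration: linear sensitivities plus pairwise interactions.
import numpy as np

rng = np.random.default_rng(2)
n = 200
loads = rng.uniform(-1, 1, size=(n, 6))     # N, A, S, RM, PM, YM (scaled)

pairs = [(i, j) for i in range(6) for j in range(i, 6)]
design = np.hstack([loads] + [(loads[:, i] * loads[:, j])[:, None]
                              for i, j in pairs])    # linear + interaction terms

true_coeffs = rng.normal(size=design.shape[1])
readings = design @ true_coeffs + rng.normal(scale=1e-3, size=n)

coeffs, *_ = np.linalg.lstsq(design, readings, rcond=None)
# coeffs estimates the sensitivities and interaction corrections
```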
Kerger, Heinz; Groth, Gesine; Kalenka, Armin; Vajkoczy, Peter; Tsai, Amy G; Intaglietta, Marcos
2003-01-01
An automated system for pO2 analysis based upon phosphorescence quenching was tested. The system was calibrated in vitro with capillary samples of saline and blood. Results were compared to a conventional measuring procedure wherein pO2 was calculated off-line by computer fitting of phosphorescence decay signals. pO2 measurements obtained by the automated system were correlated (r^2 = 0.98) with readings simultaneously generated by a blood gas analyzer, accuracy being highest in the low (0-20 mm Hg) and medium pO2 ranges (21-70 mm Hg). Measurements in in vivo studies in the hamster skin-fold preparation were similar to previously reported results. The automated system fits the phosphorescence decay data to a single exponential and allows repeated pO2 measurements in rapid sequence.
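Phosphorescence-quenching oximetry of this kind rests on the Stern-Volmer relation (standard for the technique, though not written out in the abstract; symbols are generic):

$$\frac{1}{\tau} = \frac{1}{\tau_0} + k_q\,p\mathrm{O_2}$$

where $\tau$ is the measured phosphorescence lifetime, $\tau_0$ the lifetime at zero oxygen, and $k_q$ the quenching constant, so the single-exponential fit mentioned above yields pO2 directly from the decay.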
Merritt, Stephanie M; Heimbaugh, Heather; LaChapell, Jennifer; Lee, Deborah
2013-06-01
This study is the first to examine the influence of implicit attitudes toward automation on users' trust in automation. Past empirical work has examined explicit (conscious) influences on user level of trust in automation but has not yet measured implicit influences. We examine concurrent effects of explicit propensity to trust machines and implicit attitudes toward automation on trust in an automated system. We examine differential impacts of each under varying automation performance conditions (clearly good, ambiguous, clearly poor). Participants completed both a self-report measure of propensity to trust and an Implicit Association Test measuring implicit attitude toward automation, then performed an X-ray screening task. Automation performance was manipulated within-subjects by varying the number and obviousness of errors. Explicit propensity to trust and implicit attitude toward automation did not significantly correlate. When the automation's performance was ambiguous, implicit attitude significantly affected automation trust, and its relationship with propensity to trust was additive: Increments in either were related to increases in trust. When errors were obvious, a significant interaction between the implicit and explicit measures was found, with those high in both having higher trust. Implicit attitudes have important implications for automation trust. Users may not be able to accurately report why they experience a given level of trust. To understand why users trust or fail to trust automation, measurements of implicit and explicit predictors may be necessary. Furthermore, implicit attitude toward automation might be used as a lever to effectively calibrate trust.
Satellite Calibration With LED Detectors at Mud Lake
NASA Technical Reports Server (NTRS)
Hiller, Jonathan D.
2005-01-01
Earth-monitoring instruments in orbit must be routinely calibrated in order to accurately analyze the data obtained. By comparing radiometric measurements taken on the ground in conjunction with a satellite overpass, calibration curves are derived for an orbiting instrument. A permanent, automated facility is planned for Mud Lake, Nevada (a large, homogeneous, dry lakebed) for this purpose. Because some orbiting instruments have low resolution (250 meters per pixel), inexpensive radiometers using LEDs as sensors are being developed to array widely over the lakebed. LEDs are ideal because they are inexpensive, reliable, and sense over a narrow bandwidth. By obtaining and averaging widespread data, errors are reduced and long-term surface changes can be more accurately observed.
A melting-point-of-gallium apparatus for thermometer calibration.
Sostman, H E; Manley, K A
1978-08-01
We have investigated the equilibrium melting point of gallium as a temperature fixed-point at which to calibrate small thermistor thermometers, such as those used to measure temperature in enzyme reaction analysis and other temperature-dependent biological assays. We have determined that the melting temperature of "6N" (99.999% pure) gallium is 29.770 +/- 0.002 degrees C, and that the constant-temperature plateau can be prolonged for several hours. We have designed a simple automated apparatus that exploits this phenomenon and that permits routine calibration verification of thermistor temperature probes throughout the laboratory day. We describe the physics of the gallium melt, and the design and use of the apparatus.
Calibrating Charisma: The many-facet Rasch model for leader measurement and automated coaching
NASA Astrophysics Data System (ADS)
Barney, Matt
2016-11-01
No one is a leader unless others follow. Consequently, leadership is fundamentally a social judgment construct, and may be best measured via a Many-Facet Rasch Model (MFRM) designed for this purpose. Uniquely, the MFRM allows for objective, accurate and precise estimation of leader attributes, along with identification of rater biases and other distortions of the available information. This presentation will outline a mobile computer-adaptive measurement system that measures and develops charisma, among other leader attributes. The approach calibrates and mass-personalizes artificially intelligent, Rasch-calibrated electronic coaching that is neither too hard nor too easy but “just right” to help each unique leader develop improved charisma.
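For reference, the many-facet Rasch model is commonly written in Linacre's formulation (the facet symbols below are the standard ones, not taken from this abstract):

$$\ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k$$

where $P_{nijk}$ is the probability that leader $n$ receives category $k$ rather than $k-1$ from rater $j$ on item $i$, $B_n$ is the leader's attribute level, $D_i$ the item difficulty, $C_j$ the rater severity, and $F_k$ the category threshold.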
System for Automated Calibration of Vector Modulators
NASA Technical Reports Server (NTRS)
Lux, James; Boas, Amy; Li, Samuel
2009-01-01
Vector modulators are used to impose baseband modulation on RF signals, but non-ideal behavior limits the overall performance. The non-ideal behavior of the vector modulator is compensated using data collected with the use of an automated test system driven by a LabVIEW program that systematically applies thousands of control-signal values to the device under test and collects RF measurement data. The technology innovation automates several steps in the process. First, an automated test system, using computer-controlled digital-to-analog converters (DACs) and a computer-controlled vector network analyzer (VNA), can systematically apply different I and Q signals (which represent the complex number by which the RF signal is multiplied) to the vector modulator under test (VMUT), while measuring the RF performance, specifically gain and phase. The automated test system uses the LabVIEW software to control the test equipment, collect the data, and write it to a file. The input to the LabVIEW program is either user input for systematic variation, or is provided in a file containing specific test values that should be fed to the VMUT. The output file contains both the control signals and the measured data. The second step is to post-process the file to determine the correction functions as needed. The result of the entire process is a tabular representation, which allows translation of a desired I/Q value to the required analog control signals to produce a particular RF behavior. In some applications, corrected performance is needed only for a limited range. If the vector modulator is being used as a phase shifter, there is only a need to correct I and Q values that represent points on a circle, not the entire plane. This innovation has been used to calibrate 2-GHz MMIC (monolithic microwave integrated circuit) vector modulators in the High EIRP Cluster Array project (EIRP is high effective isotropic radiated power). These calibrations were then used to create correction tables to allow the commanding of the phase shift in each of four channels used as a phased array for beam steering of a Ka-band (32-GHz) signal. The system also was the basis of a breadboard electronic beam steering system. In this breadboard, the goal was not to make systematic measurements of the properties of a vector modulator, but to drive the breadboard with a series of test patterns varying in phase and amplitude. This is essentially the same calibration process, but with the difference that the data collection process is oriented toward collecting breadboard performance, rather than the measurement of output from a network analyzer.
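The tabular correction described, translating a desired I/Q value into the required control signals, amounts to inverting a measured grid. A hedged sketch follows; measure_gain stands in for the LabVIEW/VNA loop, and its response model is invented.

```python
# Build an I/Q correction table by measuring complex gain on a control grid,
# then invert it by nearest-neighbour lookup.
import numpy as np

def measure_gain(i_ctl, q_ctl):
    """Placeholder for the automated VNA gain/phase measurement."""
    return (i_ctl - 0.03) + 1j * (0.95 * q_ctl)      # fake non-ideal response

i_vals = np.linspace(-1, 1, 101)
q_vals = np.linspace(-1, 1, 101)
II, QQ = np.meshgrid(i_vals, q_vals)
measured = measure_gain(II, QQ)                      # complex gain per (I, Q)

def controls_for(target):
    """(I, Q) control pair whose measured gain best matches the target."""
    idx = np.argmin(np.abs(measured - target))
    r, c = np.unravel_index(idx, measured.shape)
    return II[r, c], QQ[r, c]

i_cmd, q_cmd = controls_for(np.exp(1j * np.pi / 4))  # request a 45-degree shift
```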
Amelioration de la precision d'un bras robotise pour une application d'ebavurage (Improving the accuracy of a robotic arm for a deburring application)
NASA Astrophysics Data System (ADS)
Mailhot, David
Process automation is an increasingly common solution for tasks that are complex, tedious, or even dangerous for humans. Flexibility, low cost and compactness make industrial robots very attractive for automation. Even though many developments have been made to enhance robot performance, robots still cannot meet some industries' requirements. For instance, the aerospace industry requires very tight tolerances on a large variety of parts, which is not what robots were originally designed for. When it comes to robotic deburring, robot imprecision is a major problem that needs to be addressed before it can be implemented in production. This master's thesis explores different calibration techniques for a robot's dimensions that could overcome the problem and make the robotic deburring application possible. Several calibration techniques that are easy to implement in a production environment are simulated and compared. A calibration technique for the tool's dimensions is simulated and implemented to evaluate its potential. The most efficient technique is used within the application. Finally, the production environment and requirements are explained. The remaining imprecision is compensated by the use of a force/torque sensor integrated with the robot's controller and by the use of a camera. Many tests are made to define the best parameters for deburring a specific feature on a chosen part. Concluding tests are shown and demonstrate the potential use of robotic deburring. Keywords: robotic calibration, robotic arm, robotic precision, robotic deburring
Atmospheric stellar parameters from cross-correlation functions
NASA Astrophysics Data System (ADS)
Malavolta, L.; Lovis, C.; Pepe, F.; Sneden, C.; Udry, S.
2017-08-01
The increasing number of spectra gathered by spectroscopic sky surveys and transiting exoplanet follow-up has pushed the community to develop automated tools for atmospheric stellar parameter determination. Here we present a novel approach that allows the measurement of temperature (Teff), metallicity ([Fe/H]) and gravity (log g) within a few seconds and in a completely automated fashion. Rather than performing comparisons with spectral libraries, our technique is based on the determination of several cross-correlation functions (CCFs) obtained by including spectral features with different sensitivity to the photospheric parameters. We use literature stellar parameters of high signal-to-noise (SNR), high-resolution HARPS spectra of FGK main-sequence stars to calibrate Teff, [Fe/H] and log g as a function of CCF parameters. Our technique is validated using low-SNR spectra obtained with the same instrument. For FGK stars we achieve a precision of σ(Teff) = 50 K, σ(log g) = 0.09 dex and σ([Fe/H]) = 0.035 dex at SNR = 50, while the precision for observations with SNR ≳ 100 and the overall accuracy are constrained by the literature values used to calibrate the CCFs. Our approach can easily be extended to other instruments with similar spectral range and resolution, or to other spectral ranges and stars other than FGK dwarfs if a large sample of reference stars is available for the calibration. Additionally, we provide the mathematical formulation to convert synthetic equivalent widths to CCF parameters as an alternative to direct calibration. We have made our tool publicly available.
Mwashote, B.M.; Burnett, W.C.; Chanton, J.; Santos, I.R.; Dimova, N.; Swarzenski, P.W.
2010-01-01
Submarine groundwater discharge (SGD) assessments were conducted both in the laboratory and at a field site in the northeastern Gulf of Mexico, using a continuous heat-type automated seepage meter (seepmeter). The functioning of the seepmeter is based on measurements of a temperature gradient in the water between downstream and upstream positions in its flow pipe. The device has the potential of providing long-term, high-resolution measurements of SGD. Using a simple inexpensive laboratory set-up, we have shown that connecting an extension cable to the seepmeter has a negligible effect on its measuring capability. Similarly, the observed influence of very low temperature (~3 °C) on seepmeter measurements can be accounted for by conducting calibrations at such temperatures prior to field deployments. Compared to manual volumetric measurements, calibration experiments showed that at higher water flow rates (>28 cm day^-1 or cm^3 cm^-2 day^-1) an analog flowmeter overestimated flow rates by ~7%. This was apparently due to flow resistance, turbulence and formation of air bubbles in the seepmeter water flow tubes. Salinity had no significant effect on the performance of the seepmeter. Calibration results from fresh water and sea water showed close agreement at a 95% confidence level significance between the data sets from the two media (R^2 = 0.98). Comparatively, the seepmeter SGD measurements provided data that are comparable to manually-operated seepage meters, the radon geochemical tracer approach, and an electromagnetic (EM) seepage meter. © 2009 Elsevier Ltd.
Automatic scoring of dicentric chromosomes as a tool in large scale radiation accidents.
Romm, H; Ainsbury, E; Barnard, S; Barrios, L; Barquinero, J F; Beinke, C; Deperas, M; Gregoire, E; Koivistoinen, A; Lindholm, C; Moquet, J; Oestreicher, U; Puig, R; Rothkamm, K; Sommer, S; Thierens, H; Vandersickel, V; Vral, A; Wojcik, A
2013-08-30
Mass casualty scenarios of radiation exposure require high throughput biological dosimetry techniques for population triage in order to rapidly identify individuals who require clinical treatment. The manual dicentric assay is a highly suitable technique, but it is also very time consuming and requires well trained scorers. In the framework of the MULTIBIODOSE EU FP7 project, semi-automated dicentric scoring has been established in six European biodosimetry laboratories. Whole blood was irradiated with a Co-60 gamma source resulting in 8 different doses between 0 and 4.5 Gy and then shipped to the six participating laboratories. To investigate two different scoring strategies, cell cultures were set up with short term (2-3h) or long term (24h) colcemid treatment. Three classifiers for automatic dicentric detection were applied, two of which were developed specifically for these two different culture techniques. The automation procedure included metaphase finding, capture of cells at high resolution and detection of dicentric candidates. The automatically detected dicentric candidates were then evaluated by a trained human scorer, which led to the term 'semi-automated' being applied to the analysis. The six participating laboratories established at least one semi-automated calibration curve each, using the appropriate classifier for their colcemid treatment time. There was no significant difference between the calibration curves established, regardless of the classifier used. The ratio of false positive to true positive dicentric candidates was dose dependent. The total staff effort required for analysing 150 metaphases using the semi-automated approach was 2 min as opposed to 60 min for manual scoring of 50 metaphases. Semi-automated dicentric scoring is a useful tool in a large scale radiation accident as it enables high throughput screening of samples for fast triage of potentially exposed individuals. Furthermore, the results from the participating laboratories were comparable which supports networking between laboratories for this assay. Copyright © 2013 Elsevier B.V. All rights reserved.
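For context, dicentric yields from gamma irradiation are conventionally fitted with a linear-quadratic calibration curve (standard practice in cytogenetic dosimetry, not spelled out in the abstract):

$$Y = C + \alpha D + \beta D^2$$

where $Y$ is the dicentric yield per cell, $D$ the absorbed dose, $C$ the background yield, and $\alpha$, $\beta$ the fitted coefficients; an unknown dose is then estimated by inverting the fitted curve.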
Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)
NASA Astrophysics Data System (ADS)
Gorman, Richard M.; Oliver, Hilary J.
2018-06-01
Most geophysical models include many parameters that are not fully determined by theory, and can be tuned to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops) based on the Cylc workflow engine that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. This was calibrated over a 1-year period (1997), before applying the calibrated model to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
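A minimal sketch of the NLopt side of such a calibration follows, with a placeholder run_hindcast cost standing in for the WAVEWATCH III suite and invented parameter bounds; the real Cyclops suite dispatches these evaluations as Cylc tasks.

```python
# Derivative-free minimisation of a model-vs-observation error metric.
import numpy as np
import nlopt

def run_hindcast(params):
    """Placeholder: run the wave model and return the RMSE of hindcast
    significant wave height against altimeter data."""
    return float(np.sum((params - 0.3) ** 2))        # fake smooth cost

def objective(x, grad):                              # NLopt's expected signature
    return run_hindcast(x)

n_params = 19
opt = nlopt.opt(nlopt.LN_SBPLX, n_params)            # derivative-free algorithm
opt.set_min_objective(objective)
opt.set_lower_bounds(np.zeros(n_params))
opt.set_upper_bounds(np.ones(n_params))
opt.set_xtol_rel(1e-3)
x_best = opt.optimize(np.full(n_params, 0.5))
```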
NASA Astrophysics Data System (ADS)
Deutscher, N. M.; Griffith, D. W. T.; Bryant, G. W.; Wennberg, P. O.; Toon, G. C.; Washenfelder, R. A.; Keppel-Aleks, G.; Wunch, D.; Yavin, Y.; Allen, N. T.; Blavier, J.-F.; Jiménez, R.; Daube, B. C.; Bright, A. V.; Matross, D. M.; Wofsy, S. C.; Park, S.
2010-03-01
An automated Fourier Transform Spectroscopic (FTS) solar observatory was established in Darwin, Australia in August 2005. The laboratory is part of the Total Carbon Column Observing Network, and measures atmospheric column abundances of CO2 and O2 and other gases. Measured CO2 columns were calibrated against integrated aircraft profiles obtained during the TWP-ICE campaign in January-February 2006, and show good agreement with calibrations for a similar instrument in Park Falls, Wisconsin. A clear-sky low airmass relative precision of 0.1% is demonstrated in the CO2 and O2 retrieved column-averaged volume mixing ratios. The 1% negative bias in the FTS XCO2 relative to the World Meteorological Organization (WMO) calibrated in situ scale is within the uncertainties of the NIR spectroscopy and analysis.
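For reference, the column-averaged dry-air mole fraction uses the simultaneously retrieved O2 column as an internal standard (standard TCCON practice, though the formula is not quoted in the abstract):

$$X_{\mathrm{CO_2}} = 0.2095 \times \frac{\text{column CO}_2}{\text{column O}_2}$$

where 0.2095 is the dry-air mole fraction of O2; ratioing the two columns cancels many instrumental and airmass-dependent errors, which is what makes the 0.1% relative precision achievable.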
Automatic Phase Calibration for RF Cavities using Beam-Loading Signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, J. P.; Chase, B. E.
Precise calibration of the cavity phase signals is necessary for the operation of any particle accelerator. For many systems this requires human-in-the-loop adjustments based on measurements of the beam parameters downstream. Some recent work has developed a scheme for the calibration of the cavity phase using beam measurements and beam-loading; however, this scheme is still a multi-step process that requires heavy automation or a human in the loop. In this paper we analyze a new scheme that uses only RF signals reacting to beam-loading to calculate the phase of the beam relative to the cavity. This technique could be used in slow control loops to provide real-time adjustment of the cavity phase calibration without human intervention, thereby increasing the stability and reliability of the accelerator.
SERPent: Automated reduction and RFI-mitigation software for e-MERLIN
NASA Astrophysics Data System (ADS)
Peck, Luke W.; Fenech, Danielle M.
2013-08-01
The Scripted E-merlin Rfi-mitigation PipelinE for iNTerferometry (SERPent) is an automated reduction and RFI-mitigation procedure utilising the SumThreshold methodology (Offringa et al., 2010a), originally developed for the LOFAR pipeline. SERPent is written in the Parseltongue language enabling interaction with the Astronomical Image Processing Software (AIPS) program. Moreover, SERPent is a simple 'out of the box' Python script, which is easy to set up and is free of compilers. In addition to the flagging of RFI affected visibilities, the script also flags antenna zero-amplitude dropouts and Lovell telescope phase calibrator stationary scans inherent to the e-MERLIN system. Both the flagging and computational performances of SERPent are presented here, for e-MERLIN commissioning datasets for both L-band (1.3-1.8 GHz) and C-band (4-8 GHz) observations. RFI typically amounts to <20%-25% for the more problematic L-band observations and <5% for the generally RFI-quieter C-band. The level of RFI detection and flagging is more accurate and delicate than visual manual flagging, with the output immediately ready for AIPS calibration. SERPent is fully parallelised and has been tested on a range of computing systems. The current flagging rate is 110 GB day^-1 on a 'high-end' computer (16 CPUs, 100 GB memory), which amounts to ~6.9 GB CPU^-1 day^-1, with an expected increase in performance when e-MERLIN has completed its commissioning. The refining of automated reduction and calibration procedures is essential for the e-MERLIN legacy projects and future interferometers such as the SKA and the associated pathfinders (MeerKAT and ASKAP), where the vast data sizes (>TB) make traditional astronomer interactions unfeasible.
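A stripped-down rendering of the SumThreshold idea (after Offringa et al., 2010a) on a one-dimensional amplitude series is sketched below; the real pipeline applies it per baseline and polarisation, in both the time and frequency directions, with tuned thresholds.

```python
# Simplified SumThreshold pass: flag any window whose partially masked sum
# exceeds a window-size-dependent threshold.
import numpy as np

def sum_threshold(amps, chi1, rho=1.5, max_window=16):
    flags = np.zeros(amps.size, dtype=bool)
    window = 1
    while window <= max_window:
        chi = chi1 / rho ** np.log2(window)          # threshold shrinks with size
        for start in range(amps.size - window + 1):
            seg = amps[start:start + window].copy()
            seg[flags[start:start + window]] = chi   # neutralise flagged points
            if seg.sum() > chi * window:
                flags[start:start + window] = True
        window *= 2
    return flags

amps = np.abs(np.random.default_rng(6).normal(size=512))
amps[100:104] += 10.0                                # injected "RFI" burst
print(np.flatnonzero(sum_threshold(amps, chi1=4.0)))  # flags near samples 100-103
```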
NASA Astrophysics Data System (ADS)
Hayley, Kevin; Schumacher, J.; MacMillan, G. J.; Boutin, L. C.
2014-05-01
Expanding groundwater datasets collected by automated sensors, and improved groundwater databases, have caused a rapid increase in calibration data available for groundwater modeling projects. Improved methods of subsurface characterization have increased the need for model complexity to represent geological and hydrogeological interpretations. The larger calibration datasets and the need for meaningful predictive uncertainty analysis have both increased the degree of parameterization necessary during model calibration. Due to these competing demands, modern groundwater modeling efforts require a massive degree of parallelization in order to remain computationally tractable. A methodology for the calibration of highly parameterized, computationally expensive models using the Amazon EC2 cloud computing service is presented. The calibration of a regional-scale model of groundwater flow in Alberta, Canada, is provided as an example. The model covers a 30,865 km^2 domain and includes 28 hydrostratigraphic units. Aquifer properties were calibrated to more than 1,500 static hydraulic head measurements and 10 years of measurements during industrial groundwater use. Three regionally extensive aquifers were parameterized (with spatially variable hydraulic conductivity fields), as was the areal recharge boundary condition, leading to 450 adjustable parameters in total. The PEST-based model calibration was parallelized on up to 250 computing nodes located on Amazon's EC2 servers.
Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A
2005-07-01
Modelling activated sludge systems has gained an increasing momentum after the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models for full-scale systems requires essentially a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far which were mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach in performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modeling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed in view of further improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.
Development of Fully Automated Low-Cost Immunoassay System for Research Applications.
Wang, Guochun; Das, Champak; Ledden, Bradley; Sun, Qian; Nguyen, Chien
2017-10-01
Enzyme-linked immunosorbent assay (ELISA) automation for routine operation in a small research environment would be very attractive. A portable fully automated low-cost immunoassay system was designed, developed, and evaluated with several protein analytes. It features disposable capillary columns as the reaction sites and uses real-time calibration for improved accuracy. It reduces the overall assay time to less than 75 min with the ability of easy adaptation of new testing targets. The running cost is extremely low due to the nature of automation, as well as reduced material requirements. Details about system configuration, components selection, disposable fabrication, system assembly, and operation are reported. The performance of the system was initially established with a rabbit immunoglobulin G (IgG) assay, and an example of assay adaptation with an interleukin 6 (IL6) assay is shown. This system is ideal for research use, but could work for broader testing applications with further optimization.
Radar targets reveal all to automated tester
NASA Astrophysics Data System (ADS)
Hartman, R. E.
1985-09-01
Technological developments in the field of automated test equipment for low radar-cross-section (RCS) systems are reviewed. Emphasis is given to an Automated Digital Analysis and Measurement (ADAM) system for measuring and evaluating RCS scattering using a minicomputer in combination with a vector network analyzer and a positioner programmer. ADAM incorporates a stepped-CW measurement technique to obtain RCS as a function of both range and frequency at a fixed aspect angle. The operating characteristics and calibration procedures of the ADAM system are described and estimates of RCS sensitivity are obtained. The response resolution of the ADAM system is estimated to be 36 cm per measurement bandwidth (in GHz) for a minimum window. A block diagram of the error-checking routine of the ADAM system is provided.
Kern, Simon; Meyer, Klas; Guhl, Svetlana; Gräßer, Patrick; Paul, Andrea; King, Rudibert; Maiwald, Michael
2018-05-01
Monitoring specific chemical properties is the key to chemical process control. Today, mainly optical online methods are applied, which require time- and cost-intensive calibration effort. NMR spectroscopy, with its advantage of being a direct comparison method without need for calibration, has a high potential for enabling closed-loop process control while exhibiting short set-up times. Compact NMR instruments make NMR spectroscopy accessible in industrial and rough environments for process monitoring and advanced process control strategies. We present a fully automated data analysis approach which is completely based on physically motivated spectral models as first-principles information (indirect hard modeling, IHM) and applied it to a given pharmaceutical lithiation reaction in the framework of the European Union's Horizon 2020 project CONSENS. Online low-field NMR (LF NMR) data was analyzed by IHM with low calibration effort, compared to a multivariate PLS-R (partial least squares regression) approach, and both were validated using online high-field NMR (HF NMR) spectroscopy. Graphical abstract: NMR sensor module for monitoring of the aromatic coupling of 1-fluoro-2-nitrobenzene (FNB) with aniline to 2-nitrodiphenylamine (NDPA) using lithium-bis(trimethylsilyl)amide (Li-HMDS) in continuous operation. Online 43.5 MHz low-field NMR (LF) was compared to 500 MHz high-field NMR spectroscopy (HF) as reference method.
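As a heavily simplified stand-in for the spectral-model idea behind IHM (real indirect hard modeling fits parametric peak models with adjustable positions and widths, not fixed component spectra), a classical least-squares mixture fit looks like this; the axis, line shapes, and concentrations are invented.

```python
# Fit a measured spectrum as a linear combination of component spectra and
# read concentrations off the fitted weights.
import numpy as np

axis = np.linspace(0, 10, 500)                       # chemical-shift axis (illustrative)

def peak(center, width):                             # Lorentzian line shape
    return 1.0 / (1.0 + ((axis - center) / width) ** 2)

components = np.stack([peak(2.0, 0.1), peak(5.5, 0.15), peak(7.3, 0.1)])
true_conc = np.array([0.4, 1.2, 0.7])
spectrum = true_conc @ components \
    + np.random.default_rng(5).normal(0, 0.01, axis.size)

conc, *_ = np.linalg.lstsq(components.T, spectrum, rcond=None)
print(conc)                                          # close to true_conc
```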
NASA Technical Reports Server (NTRS)
Crane, Robert K.; Wang, Xuhe; Westenhaver, David
1996-01-01
The preprocessing software manual describes the Actspp program originally developed to observe and diagnose Advanced Communications Technology Satellite (ACTS) propagation terminal/receiver problems. However, it has been quite useful for automating the preprocessing functions needed to convert the terminal output to useful attenuation estimates. Prior to having data acceptable for archival functions, the individual receiver system must be calibrated and the power level shifts caused by ranging tone modulation must be removed. Actspp provides three output files: the daylog, the diurnal coefficient file, and the file that contains calibration information.
Calibration of a shock wave position sensor using artificial neural networks
NASA Technical Reports Server (NTRS)
Decker, Arthur J.; Weiland, Kenneth E.
1993-01-01
This report discusses the calibration of a shock wave position sensor. The position sensor works by using artificial neural networks to map cropped CCD frames of the shadows of the shock wave into the value of the shock wave position. This project was done as a tutorial demonstration of method and feasibility. It used a laboratory shadowgraph, nozzle, and commercial neural network package. The results were quite good, indicating that artificial neural networks can be used efficiently to automate the semi-quantitative applications of flow visualization.
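A toy analogue of this image-to-position mapping is sketched below, with a small network regressing shock position from synthetic one-dimensional "shadow" profiles; the original work used cropped CCD shadowgraph frames and a commercial neural network package.

```python
# Train a small neural network to map shadow profiles to shock position.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n_frames, width = 500, 64
positions = rng.uniform(10, 54, n_frames)            # true shock locations
frames = np.empty((n_frames, width))
for k, p in enumerate(positions):                    # dark band at the shock
    frames[k] = 1.0 - np.exp(-0.5 * ((np.arange(width) - p) / 2.0) ** 2)
frames += rng.normal(0, 0.02, frames.shape)          # sensor noise

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(frames, positions)                           # the calibration step
print(net.predict(frames[:3]), positions[:3])        # should agree closely
```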
Palmblad, Magnus; van der Burgt, Yuri E M; Dalebout, Hans; Derks, Rico J E; Schoenmaker, Bart; Deelder, André M
2009-05-02
Accurate mass determination enhances peptide identification in mass spectrometry based proteomics. We here describe the combination of two previously published open source software tools to improve mass measurement accuracy in Fourier transform ion cyclotron resonance mass spectrometry (FTICRMS). The first program, msalign, aligns one MS/MS dataset with one FTICRMS dataset. The second software, recal2, uses peptides identified from the MS/MS data for automated internal calibration of the FTICR spectra, resulting in sub-ppm mass measurement errors.
Calibration of areal surface topography measuring instruments
NASA Astrophysics Data System (ADS)
Seewig, J.; Eifler, M.
2017-06-01
The ISO standards related to the calibration of areal surface topography measuring instruments are the ISO 25178-6xx series, which defines the relevant metrological characteristics for the calibration of different measuring principles, and the ISO 25178-7xx series, which defines the actual calibration procedures. As the field of areal measurement is not yet fully standardized, there are still open questions to be addressed which are subject to current research. Based on this, selected research results of the authors in this area are presented. This includes the design and fabrication of areal material measures. For this topic, two examples are presented: the direct laser writing of a stepless material measure for the calibration of the height axis, which is based on the Abbott curve, and the manufacturing of a Siemens star for the determination of the lateral resolution limit. Based on these results, a new definition for the resolution criterion, the small-scale fidelity, which is still under discussion, is also presented. Additionally, a software solution for automated calibration procedures is outlined.
Measuring Pressure Has a New Standard
NASA Technical Reports Server (NTRS)
2002-01-01
The Force-Balanced Piston Gauge (FPG) tests and calibrates instrumentation operating in the low pressure range. The system provides a traceable, primary calibration standard for measuring pressures in the range of near 0 to 15 kPa (2.2 psi) in both gauge and absolute measurement modes. The hardware combines a large area piston-cylinder with a load cell measuring the force resulting from pressures across the piston. The mass of the piston can be tared out, allowing measurement to start from zero. A pressure higher than the measured pressure, which keeps the piston centered, lubricates an innovative conical gap located between the piston and the cylinder, eliminating the need for piston rotation. A pressure controller based on the control of low gas flow automates the pressure control. DHI markets the FPG as an automated primary standard for very low-gauge and absolute pressures. DHI is selling the FPG to high-end metrology laboratories on a case by case basis, with a full commercial release to follow.
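The measurement principle reduces to dividing the load-cell force by the effective piston-cylinder area (symbols below are generic, not DHI's notation):

$$P = \frac{F}{A_{\mathrm{eff}}}$$

with the piston's own weight tared out so that $F$ reflects only the pressure load across the piston; in absolute mode the reference side of the piston is evacuated.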
Imaging workflow and calibration for CT-guided time-domain fluorescence tomography
Tichauer, Kenneth M.; Holt, Robert W.; El-Ghussein, Fadi; Zhu, Qun; Dehghani, Hamid; Leblond, Frederic; Pogue, Brian W.
2011-01-01
In this study, several key optimization steps are outlined for a non-contact, time-correlated single photon counting small animal optical tomography system, using simultaneous collection of both fluorescence and transmittance data. The system is presented for time-domain image reconstruction in vivo, illustrating the sensitivity from single photon counting and the calibration steps needed to accurately process the data. In particular, laser time- and amplitude-referencing, detector and filter calibrations, and collection of a suitable instrument response function are all presented in the context of time-domain fluorescence tomography and a fully automated workflow is described. Preliminary phantom time-domain reconstructed images demonstrate the fidelity of the workflow for fluorescence tomography based on signal from multiple time gates. PMID:22076264
NASA Technical Reports Server (NTRS)
Axholt, Magnus; Skoglund, Martin; Peterson, Stephen D.; Cooper, Matthew D.; Schoen, Thomas B.; Gustafsson, Fredrik; Ynnerman, Anders; Ellis, Stephen R.
2010-01-01
Augmented Reality (AR) is a technique by which computer generated signals synthesize impressions that are made to coexist with the surrounding real world as perceived by the user. Human smell, taste, touch and hearing can all be augmented, but most commonly AR refers to the human vision being overlaid with information otherwise not readily available to the user. A correct calibration is important on an application level, ensuring that e.g. data labels are presented at correct locations, but also on a system level to enable display techniques such as stereoscopy to function properly [SOURCE]. Thus, vital to AR, calibration methodology is an important research area. While great achievements have already been made, there are some properties in current calibration methods for augmenting vision which do not translate from their traditional use in automated camera calibration to their use with a human operator. This paper uses a Monte Carlo simulation of a standard direct linear transformation camera calibration to investigate how user-introduced head orientation noise affects the parameter estimation during a calibration procedure of an optical see-through head mounted display.
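The underlying direct linear transformation solve that such a Monte Carlo study perturbs can be written in a few lines; the points and projection matrix below are random but geometrically consistent.

```python
# Bare-bones DLT: recover a 3x4 camera matrix from 3-D/2-D correspondences.
import numpy as np

rng = np.random.default_rng(4)
P_true = rng.normal(size=(3, 4))                     # arbitrary projection matrix
X = np.hstack([rng.uniform(-1, 1, (12, 3)), np.ones((12, 1))])  # homogeneous 3-D
proj = (P_true @ X.T).T
uv = proj[:, :2] / proj[:, 2:3]                      # ideal 2-D image points

rows = []
for Xw, (u, v) in zip(X, uv):
    rows.append(np.concatenate([Xw, np.zeros(4), -u * Xw]))
    rows.append(np.concatenate([np.zeros(4), Xw, -v * Xw]))
A = np.asarray(rows)                                 # 2n x 12 homogeneous system

_, _, Vt = np.linalg.svd(A)
P_est = Vt[-1].reshape(3, 4)                         # right null vector of A
print(P_est / P_true)                                # constant ratio: equal up to scale
```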
An automated calibration method for non-see-through head mounted displays.
Gilson, Stuart J; Fitzgibbon, Andrew W; Glennerster, Andrew
2011-08-15
Accurate calibration of a head mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet, existing calibration methods are time consuming and depend on human judgements, making them error prone, and are often limited to optical see-through HMDs. Building on our existing approach to HMD calibration (Gilson et al., 2008), we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside a HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in multiple positions. The centroids of the markers on the calibration object are recovered and their locations re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the HMD display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated a HMD in this manner and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors without the need for error-prone human judgements. Copyright © 2011 Elsevier B.V. All rights reserved.
Multiple calibrator measurements improve accuracy and stability estimates of automated assays.
Akbas, Neval; Budd, Jeffrey R; Klee, George G
2016-01-01
The effects of combining multiple calibrations on assay accuracy (bias) and measurement of calibration stability were investigated for total triiodothyronine (TT3), vitamin B12 and luteinizing hormone (LH) using Beckman Coulter's Access 2 analyzer. Three calibration procedures (CC1, CC2 and CC3) combined 12, 34 and 56 calibrator measurements over 1, 2, and 3 days. Bias was calculated between target values and average measured value over 3 consecutive days after calibration. Using regression analysis of calibrator measurements versus measurement date, calibration stability was determined as the maximum number of days before a calibrator measurement exceeded 5% tolerance limits. Competitive assays (TT3, vitamin B12) had positive time regression slopes, while the sandwich assay (LH) had a negative slope. Bias values for TT3 were -2.49%, 1.49%, and -0.50% using CC1, CC2 and CC3 respectively, with calibrator stability of 32, 20, and 30 days. Bias values for vitamin B12 were 2.44%, 0.91%, and -0.50%, with calibrator stability of 4, 9, and 12 days. Bias values for LH were 2.26%, 1.44% and -0.29% with calibrator stability of >43, 39 and 36 days. Measured stability was more consistent across calibration procedures using percent change rather than difference from target: 26 days for TT3, 12 days for B12 and 31 days for LH. Averaging over multiple calibrations produced smaller bias, consistent with improved accuracy. Time regression slopes in percent change were unaffected by number of calibration measurements but calibrator stability measured from the target value was highly affected by the calibrator value at time zero.
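The two quantities described, percent bias against the target value and calibrator stability from a regression of percent change versus day, can be computed as in this sketch with made-up numbers.

```python
# Percent bias over 3 post-calibration days, and days until the regressed
# percent change hits the 5% tolerance limit.
import numpy as np

target = 100.0
measured_3day = np.array([101.2, 100.6, 101.5])      # post-calibration checks
bias_pct = (measured_3day.mean() - target) / target * 100

days = np.arange(0, 40, 3)
pct_change = 0.12 * days + np.random.default_rng(7).normal(0, 0.3, days.size)
slope, intercept = np.polyfit(days, pct_change, 1)   # drift in %/day
tolerance = 5.0                                      # +/-5% tolerance limit
stability_days = (tolerance - intercept) / slope     # day the limit is reached
print(round(bias_pct, 2), round(stability_days, 1))
```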
Development and application of an automated precision solar radiometer
NASA Astrophysics Data System (ADS)
Qiu, Gang-gang; Li, Xin; Zhang, Quan; Zheng, Xiao-bing; Yan, Jing
2016-10-01
Automated field vicarious calibration is a growing trend for satellite remote sensors, and it requires a solar radiometer that can automatically measure reliable data over long periods regardless of weather conditions and transfer the measurement data to the user's office. An automated precision solar radiometer has been developed. It is used to measure the solar spectral irradiance received at the Earth's surface. The instrument consists of 8 parallel, separate silicon-photodiode-based channels with narrow band-pass filters covering the visible to near-IR regions. Each channel has a 2.0° full-angle Field of View (FOV). The detectors and filters are temperature stabilized at 30 ± 0.2 °C using a Thermal Energy Converter. The instrument is pointed toward the Sun via an auto-tracking system that actively tracks the Sun to within ±0.1°. It collects data automatically and communicates with the user terminal through BDS (China's BeiDou Navigation Satellite System), while redundantly recording data, including working state and errors, in internal memory. The solar radiometer is automated in the sense that it requires no supervision throughout its operation. It calculates start and stop times every day, matched to the times of sunrise and sunset, and stops working when precipitation begins. Calibrated via Langley curves and observed simultaneously with a CE318, the difference in Aerosol Optical Depth (AOD) is within 5%. The solar radiometer has run under all kinds of harsh weather conditions in the Gobi desert at Dunhuang and obtained AODs nearly continuously for eight months. This paper presents the instrument design analysis, atmospheric optical depth retrievals, and experimental results.
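Langley calibration, used above to calibrate the radiometer, fits ln(V) against relative airmass m: the intercept gives the top-of-atmosphere signal V0 and the slope the (negative) total optical depth. A worked toy example with an assumed V0 and optical depth, not the instrument's values:

```python
import numpy as np

# Synthetic clear-morning Langley data: V = V0 * exp(-tau * m), with
# assumed V0 = 2.0 and total optical depth tau = 0.15, plus 0.5% noise.
rng = np.random.default_rng(3)
m = np.linspace(2, 6, 25)                       # relative airmass
V = 2.0 * np.exp(-0.15 * m) * (1 + rng.normal(0, 0.005, m.size))

slope, lnV0 = np.polyfit(m, np.log(V), 1)
print(f"retrieved V0 = {np.exp(lnV0):.3f}, total optical depth = {-slope:.3f}")

# Once V0 is known, any later measurement yields an optical depth directly;
# subtracting the Rayleigh (and gas absorption) contribution leaves the AOD.
V_new, m_new = 1.45, 2.5
tau_total = (lnV0 - np.log(V_new)) / m_new
print(f"tau_total at m = {m_new}: {tau_total:.3f}")
```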
21 CFR 111.25 - What are the requirements under this subpart D for written procedures?
Code of Federal Regulations, 2012 CFR
2012-04-01
... MANUFACTURING, PACKAGING, LABELING, OR HOLDING OPERATIONS FOR DIETARY SUPPLEMENTS Equipment and Utensils § 111... dietary supplement; (b) Calibrating, inspecting, and checking automated, mechanical, and electronic... other contact surfaces that are used to manufacture, package, label, or hold components or dietary...
21 CFR 111.25 - What are the requirements under this subpart D for written procedures?
Code of Federal Regulations, 2014 CFR
2014-04-01
... MANUFACTURING, PACKAGING, LABELING, OR HOLDING OPERATIONS FOR DIETARY SUPPLEMENTS Equipment and Utensils § 111... dietary supplement; (b) Calibrating, inspecting, and checking automated, mechanical, and electronic... other contact surfaces that are used to manufacture, package, label, or hold components or dietary...
21 CFR 111.35 - Under this subpart D, what records must you make and keep?
Code of Federal Regulations, 2012 CFR
2012-04-01
... MANUFACTURING, PACKAGING, LABELING, OR HOLDING OPERATIONS FOR DIETARY SUPPLEMENTS Equipment and Utensils § 111... or dietary supplement; (ii) Calibrating, inspecting, and checking automated, mechanical, and... dietary supplements; (2) Documentation, in individual equipment logs, of the date of the use, maintenance...
21 CFR 111.35 - Under this subpart D, what records must you make and keep?
Code of Federal Regulations, 2014 CFR
2014-04-01
... MANUFACTURING, PACKAGING, LABELING, OR HOLDING OPERATIONS FOR DIETARY SUPPLEMENTS Equipment and Utensils § 111... or dietary supplement; (ii) Calibrating, inspecting, and checking automated, mechanical, and... dietary supplements; (2) Documentation, in individual equipment logs, of the date of the use, maintenance...
User-friendly freehand ultrasound calibration using Lego bricks and automatic registration.
Xiao, Yiming; Yan, Charles Xiao Bo; Drouin, Simon; De Nigris, Dante; Kochanowska, Anna; Collins, D Louis
2016-09-01
As an inexpensive, noninvasive, and portable clinical imaging modality, ultrasound (US) has been widely employed in many interventional procedures for monitoring potential tissue deformation, surgical tool placement, and locating surgical targets. The application requires a spatial mapping between 2D US images and the 3D coordinates of the patient. Although the positions of the devices (i.e., the ultrasound transducer) and the patient can easily be recorded by a motion tracking system, the spatial relationship between the US image and the tracker attached to the US transducer needs to be estimated through a US calibration procedure. Various calibration techniques have previously been proposed, in which a spatial transformation is computed to match the coordinates of corresponding features in a physical phantom with those seen in the US scans. However, most of these methods are difficult for novice users. We propose an ultrasound calibration method that constructs a phantom from simple Lego bricks and applies an automated multi-slice 2D-3D registration scheme without volumetric reconstruction. The method was validated for its calibration accuracy and reproducibility. Our method yields a calibration accuracy of [Formula: see text] mm and a calibration reproducibility of 1.29 mm. We have proposed a robust, inexpensive, and easy-to-use ultrasound calibration method.
Krecsák, László; Micsik, Tamás; Kiszler, Gábor; Krenács, Tibor; Szabó, Dániel; Jónás, Viktor; Császár, Gergely; Czuni, László; Gurzó, Péter; Ficsor, Levente; Molnár, Béla
2011-01-18
The immunohistochemical detection of estrogen (ER) and progesterone (PR) receptors in breast cancer is routinely used for prognostic and predictive testing. Whole slide digitalization, supported by dedicated software tools, allows quantization of image objects (e.g. cell membranes, nuclei) and an unbiased analysis of immunostaining results. Validation studies of image analysis applications for the detection of ER and PR in breast cancer specimens showed strong concordance between the pathologist's manual assessment of slides and scoring performed using different software applications. The effectiveness of two connected semi-automated image analysis software applications (the NuclearQuant v. 1.13 application for Pannoramic™ Viewer v. 1.14) for determination of ER and PR status in formalin-fixed paraffin-embedded breast cancer specimens immunostained with the automated Leica Bond Max system was studied. First, the detection algorithm was calibrated to the scores provided by an independent assessor (pathologist), using selected areas from 38 small digital slides (created from 16 cases) containing a mean of 195 cells. Each cell was manually marked and scored according to the Allred system, combining frequency and intensity scores. The performance of the calibrated algorithm was tested on 16 cases (14 invasive ductal carcinoma, 2 invasive lobular carcinoma) against the pathologist's manual scoring of digital slides. The detection was calibrated to 87 percent object detection agreement and almost perfect Total Score agreement (Cohen's kappa 0.859, quadratic weighted kappa 0.986), up from slight or moderate agreement at the start of the study with the un-calibrated algorithm. The performance of the application was tested against the pathologist's manual scoring of digital slides on 53 regions of interest of 16 ER and PR slides covering all positivity ranges, and the quadratic weighted kappa indicated almost perfect agreement (κ = 0.981) between the two scoring schemes. The NuclearQuant v. 1.13 application for Pannoramic™ Viewer v. 1.14 proved to be a reliable image analysis tool for pathologists testing ER and PR status in breast cancer.
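The agreement statistic used throughout this validation, quadratic weighted kappa, can be computed directly with scikit-learn. The Allred total scores (0-8) below are invented for illustration, not the study's data:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical Allred total scores for the same regions of interest,
# scored manually by a pathologist and by the image-analysis algorithm.
pathologist = [8, 7, 7, 6, 8, 3, 0, 2, 5, 8, 6, 7]
algorithm   = [8, 7, 6, 6, 8, 3, 0, 3, 5, 8, 6, 8]

kappa   = cohen_kappa_score(pathologist, algorithm)
kappa_q = cohen_kappa_score(pathologist, algorithm, weights="quadratic")
print(f"Cohen's kappa = {kappa:.3f}, quadratic weighted kappa = {kappa_q:.3f}")
```

The quadratic weighting penalizes disagreements by the square of their distance on the ordinal scale, which is why near-miss scores (7 vs 8) barely lower the statistic while gross disagreements would.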
Analysis of trust in autonomy for convoy operations
NASA Astrophysics Data System (ADS)
Gremillion, Gregory M.; Metcalfe, Jason S.; Marathe, Amar R.; Paul, Victor J.; Christensen, James; Drnec, Kim; Haynes, Benjamin; Atwater, Corey
2016-05-01
With the growing use of automation in civilian and military contexts that engage cooperatively with humans, the operator's level of trust in the automated system is a major factor in determining the efficacy of human-autonomy teams. Suboptimal levels of human trust in autonomy (TiA) can be detrimental to joint team performance. This mis-calibrated trust can manifest in several ways, such as distrust and complete disuse of the autonomy, or complacency, which results in an unsupervised autonomous system. This work investigates human behaviors that may reflect TiA in the context of an automated driving task, with the goal of improving team performance. Subjects performed a simulated leader-follower driving task with an automated driving assistant and could choose to engage an automated lane-keeping and active cruise control system of varying performance levels. The experimental data were analyzed to identify contextual features of the simulation environment that correlated with instances of automation engagement and disengagement. Furthermore, behaviors that potentially indicate inappropriate TiA levels were identified in the subject trials using estimates of momentary risk and agent performance as functions of these contextual features. Inter-subject and intra-subject trends in automation usage and performance were also identified. This analysis indicated that for poorer-performing automation, TiA decreases with time, while higher-performing automation induces less drift toward diminishing usage and, in some cases, increases in TiA. Subject use of automation was also found to be largely influenced by course features.
How to constrain multi-objective calibrations of the SWAT model using water balance components
USDA-ARS?s Scientific Manuscript database
Automated procedures are often used to provide adequate fits between hydrologic model estimates and observed data. While the models may provide good fits based upon numeric criteria, they may still not accurately represent the basic hydrologic characteristics of the represented watershed. Here we ...
Radiometric characterization of hyperspectral imagers using multispectral sensors
NASA Astrophysics Data System (ADS)
McCorkel, Joel; Thome, Kurt; Leisso, Nathan; Anderson, Nikolaus; Czapla-Myers, Jeff
2009-08-01
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite based sensors. Often, ground-truth measurements at these tests sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal in the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This work studies the feasibility of determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on the Moderate Resolution Imaging Spectroradiometer (MODIS) as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. Hyperion bands are compared to MODIS by band averaging Hyperion's high spectral resolution data with the relative spectral response of MODIS. The results compare cross-calibration scenarios that differ in image acquisition coincidence, test site used for the calibration, and reference sensor. Cross-calibration results are presented that show agreement between the use of coincident and non-coincident image pairs within 2% in most bands as well as similar agreement between results that employ the different MODIS sensors as a reference.
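The band-averaging step, comparing Hyperion's fine-resolution spectrum against a MODIS band, amounts to a relative-spectral-response-weighted integral: L_band = ∫ L(λ) RSR(λ) dλ / ∫ RSR(λ) dλ. A small sketch with a made-up radiance spectrum and a Gaussian stand-in for a MODIS band's RSR:

```python
import numpy as np

def band_average(wl_hyp, rad_hyp, wl_rsr, rsr):
    """Band-average a hyperspectral radiance spectrum with a multispectral
    band's relative spectral response (RSR)."""
    rsr_i = np.interp(wl_hyp, wl_rsr, rsr, left=0.0, right=0.0)
    return np.trapz(rad_hyp * rsr_i, wl_hyp) / np.trapz(rsr_i, wl_hyp)

# Hypothetical Hyperion-like spectrum on a 10 nm grid, and a Gaussian RSR
# standing in for a MODIS band centred at 860 nm (both invented).
wl = np.arange(400, 1000, 10.0)
L = 80 - 0.05 * (wl - 400)                  # made-up radiance spectrum
wl_b = np.arange(820, 901, 1.0)
rsr = np.exp(-0.5 * ((wl_b - 860) / 15) ** 2)

print(f"band-averaged radiance: {band_average(wl, L, wl_b, rsr):.2f}")
```

The ratio of this band-averaged value to the reference sensor's measured radiance over the same site is the cross-calibration correction being evaluated.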
Radiometric Characterization of Hyperspectral Imagers using Multispectral Sensors
NASA Technical Reports Server (NTRS)
McCorkel, Joel; Kurt, Thome; Leisso, Nathan; Anderson, Nikolaus; Czapla-Myers, Jeff
2009-01-01
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite based sensors. Often, ground-truth measurements at these test sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal in the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This work studies the feasibility of determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on the Moderate Resolution Imaging Spectroradiometer (MODIS) as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. Hyperion bands are compared to MODIS by band averaging Hyperion's high spectral resolution data with the relative spectral response of MODIS. The results compare cross-calibration scenarios that differ in image acquisition coincidence, test site used for the calibration, and reference sensor. Cross-calibration results are presented that show agreement between the use of coincident and non-coincident image pairs within 2% in most bands as well as similar agreement between results that employ the different MODIS sensors as a reference.
Twelve automated thresholding methods for segmentation of PET images: a phantom study.
Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M
2012-06-21
Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
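Of the algorithms tested, Ridler's clustering (isodata) method is simple enough to show in full: iterate the threshold to the midpoint of the mean intensities above and below it until it stabilizes. The synthetic "hot sphere" image below is only an illustration, not the phantom data:

```python
import numpy as np

def ridler_threshold(img, eps=0.5):
    """Ridler-Calvard (isodata) threshold: move the threshold to the
    midpoint of the two class means until it converges."""
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

# Synthetic "PET sphere": a hot disc on a warm background, SBR ~ 4:1
rng = np.random.default_rng(4)
img = rng.normal(100, 10, (64, 64))
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2] += 300

t = ridler_threshold(img)
print(f"isodata threshold = {t:.1f}; segmented pixels = {(img > t).sum()}")
# Compare with the conventional PET reference of 42% of maximum uptake:
print(f"42%-of-max threshold = {0.42 * img.max():.1f}")
```

Note how the data-driven threshold adapts to the actual foreground/background statistics, whereas the 42%-of-max rule depends only on the single hottest voxel.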
Twelve automated thresholding methods for segmentation of PET images: a phantom study
NASA Astrophysics Data System (ADS)
Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.
2012-06-01
Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
Determination of 241Am in soil using an automated nuclear radiation measurement laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engstrom, D.E.; White, M.G.; Dunaway, P.B.
The recent completion of REECo's Automated Laboratory and associated software systems has provided a significant increase in capability while reducing manpower requirements. The system is designed to perform gamma spectrum analyses on the large numbers of samples required by the current Nevada Applied Ecology Group (NAEG) and Plutonium Distribution Inventory Program (PDIP) soil sampling programs while maintaining sufficient sensitivities as defined by earlier investigations of the same type. The hardware and systems are generally described in this paper, with emphasis being placed on spectrum reduction and the calibration procedures used for soil samples.
An Imaging System for Satellite Hypervelocity Impact Debris Characterization
NASA Astrophysics Data System (ADS)
Moraguez, M.; Liou, J.; Fitz-Coy, N.; Patankar, K.; Cowardin, H.
This paper discusses the design of an automated imaging system for size characterization of debris produced by the DebriSat hypervelocity impact test. The goal of the DebriSat project is to update satellite breakup models. A representative LEO satellite, DebriSat, was constructed and subjected to a hypervelocity impact test. The impact produced an estimated 85,000 debris fragments. The size distribution of these fragments is required to update the current satellite breakup models. An automated imaging system was developed for the size characterization of the debris fragments. The system uses images taken from various azimuth and elevation angles around the object to produce a 3D representation of the fragment via a space carving algorithm. The system consists of N point-and-shoot cameras attached to a rigid support structure that defines the elevation angle for each camera. The debris fragment is placed on a turntable that is incrementally rotated to desired azimuth angles. The number of images acquired can be varied based on the desired resolution. Appropriate background and lighting is used for ease of object detection. The system calibration and image acquisition process are automated to result in push-button operations. However, for quality assurance reasons, the system is semi-autonomous by design to ensure operator involvement. This paper describes the imaging system setup, calibration procedure, repeatability analysis, and the results of the debris characterization.
The Automation and Exoplanet Orbital Characterization from the Gemini Planet Imager Exoplanet Survey
NASA Astrophysics Data System (ADS)
Wang, Jason Jinfei; Graham, James; Perrin, Marshall; Pueyo, Laurent; Savransky, Dmitry; Kalas, Paul; Arriaga, Pauline; Chilcote, Jeffrey K.; De Rosa, Robert J.; Ruffio, Jean-Baptiste; Sivaramakrishnan, Anand; Gemini Planet Imager Exoplanet Survey Collaboration
2018-01-01
The Gemini Planet Imager (GPI) Exoplanet Survey (GPIES) is a multi-year 600-star survey to discover and characterize young Jovian exoplanets and their planet forming environments. For large surveys like GPIES, it is critical to have a uniform dataset processed with the latest techniques and calibrations. I will describe the GPI Data Cruncher, an automated data processing framework that is able to generate fully reduced data minutes after the data are taken and can also reprocess the entire campaign in a single day on a supercomputer. The Data Cruncher integrates into a larger automated data processing infrastructure which syncs, logs, and displays the data. I will discuss the benefits of the GPIES data infrastructure, including optimizing observing strategies, finding planets, characterizing instrument performance, and constraining giant planet occurrence. I will also discuss my work in characterizing the exoplanets we have imaged in GPIES through monitoring their orbits. Using advanced data processing algorithms and GPI's precise astrometric calibration, I will show that GPI can achieve one milliarcsecond astrometry on the extensively-studied planet Beta Pic b. With GPI, we can confidently rule out a possible transit of Beta Pic b, but have precise timings for a Hill sphere transit, and I will discuss efforts to search for transiting circumplanetary material this year. I will also discuss the orbital monitoring of other exoplanets as part of GPIES.
An Imaging System for Satellite Hypervelocity Impact Debris Characterization
NASA Technical Reports Server (NTRS)
Moraguez, Matthew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Cowardin, Heather
2015-01-01
This paper discusses the design of an automated imaging system for size characterization of debris produced by the DebriSat hypervelocity impact test. The goal of the DebriSat project is to update satellite breakup models. A representative LEO satellite, DebriSat, was constructed and subjected to a hypervelocity impact test. The impact produced an estimated 85,000 debris fragments. The size distribution of these fragments is required to update the current satellite breakup models. An automated imaging system was developed for the size characterization of the debris fragments. The system uses images taken from various azimuth and elevation angles around the object to produce a 3D representation of the fragment via a space carving algorithm. The system consists of N point-and-shoot cameras attached to a rigid support structure that defines the elevation angle for each camera. The debris fragment is placed on a turntable that is incrementally rotated to desired azimuth angles. The number of images acquired can be varied based on the desired resolution. Appropriate background and lighting is used for ease of object detection. The system calibration and image acquisition process are automated to result in push-button operations. However, for quality assurance reasons, the system is semi-autonomous by design to ensure operator involvement. This paper describes the imaging system setup, calibration procedure, repeatability analysis, and the results of the debris characterization.
Measuring the orthogonality error of coil systems
Heilig, B.; Csontos, A.; Pajunpää, K.; White, Tim; St. Louis, B.; Calp, D.
2012-01-01
Recently, a simple method was proposed for the determination of the pitch angle between two coil axes by means of a total field magnetometer. The method is applicable when the homogeneous volume in the centre of the coil system is large enough to accommodate the total field sensor. Orthogonality of calibration coil systems used for calibrating vector magnetometers can be attained by this procedure. In addition, the method can be easily automated and applied to the calibration of delta inclination–delta declination (dIdD) magnetometers. The method was tested by several independent research groups, having a variety of test equipment, and located at differing geomagnetic observatories, including: Nurmijärvi, Finland; Hermanus, South Africa; Ottawa, Canada; Tihany, Hungary. This paper summarizes the test results, and discusses the advantages and limitations of the method.
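A simplified version of the underlying idea is to recover the angle between two coil axes from scalar magnitude readings alone, assuming (as a deliberate simplification of the published procedure) that the ambient field is compensated so the magnetometer sees only the coil fields:

```python
import numpy as np

def coil_angle(B1_mag, B2_mag, B12_mag):
    """Angle between two coil axes from three scalar (total-field) readings:
    each coil alone, then both together. Follows from
        |B1 + B2|^2 = B1^2 + B2^2 + 2*B1*B2*cos(theta).
    Assumes the ambient field is nulled during the measurements."""
    cos_t = (B12_mag**2 - B1_mag**2 - B2_mag**2) / (2 * B1_mag * B2_mag)
    return np.degrees(np.arccos(np.clip(cos_t, -1, 1)))

# Synthetic check: two 50,000 nT coil fields, 0.5 deg away from orthogonal
theta_true = np.radians(90.5)
B1 = np.array([50e3, 0.0, 0.0])
B2 = 50e3 * np.array([np.cos(theta_true), np.sin(theta_true), 0.0])
angle = coil_angle(np.linalg.norm(B1), np.linalg.norm(B2),
                   np.linalg.norm(B1 + B2))
print(f"recovered angle = {angle:.3f} deg (orthogonality error "
      f"= {angle - 90:.3f} deg)")
```

Because a total-field magnetometer is insensitive to its own orientation, this kind of scalar procedure avoids the alignment errors that plague vector-sensor approaches, which is what makes it attractive for automation.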
HoloHands: games console interface for controlling holographic optical manipulation
NASA Astrophysics Data System (ADS)
McDonald, C.; McPherson, M.; McDougall, C.; McGloin, D.
2013-03-01
The increasing number of applications for holographic manipulation techniques has sparked the development of more accessible control interfaces. Here, we describe a holographic optical tweezers experiment which is controlled by gestures that are detected by a Microsoft Kinect. We demonstrate that this technique can be used to calibrate the tweezers using the Stokes drag method and compare this to automated calibrations. We also show that multiple particle manipulation can be handled. This is a promising new line of research for gesture-based control which could find applications in a wide variety of experimental situations.
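The Stokes drag calibration mentioned here balances trap force against viscous drag on the bead: at the steady-state displacement x, k·x = 6πηr·v. A minimal sketch with hypothetical bead and flow parameters (the Kinect gesture layer is omitted):

```python
import numpy as np

def stokes_drag_stiffness(radius_m, velocity_ms, displacement_m,
                          viscosity=8.9e-4):
    """Trap stiffness from the Stokes drag method: the fluid (or stage)
    moves at constant speed v and the bead settles where trap force
    balances drag, so k = 6*pi*eta*r*v / x. Default viscosity: water, Pa*s."""
    gamma = 6 * np.pi * viscosity * radius_m      # drag coefficient (kg/s)
    return gamma * velocity_ms / displacement_m   # stiffness (N/m)

# Hypothetical numbers: a 1 um diameter bead in water, dragged at 50 um/s,
# displaced 120 nm from the trap centre
k = stokes_drag_stiffness(0.5e-6, 50e-6, 120e-9)
print(f"trap stiffness = {k * 1e6:.2f} pN/um")
```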
Behavior driven testing in ALMA telescope calibration software
NASA Astrophysics Data System (ADS)
Gil, Juan P.; Garces, Mario; Broguiere, Dominique; Shen, Tzu-Chiang
2016-07-01
The ALMA software development cycle includes well-defined testing stages that involve developers, testers and scientists. We adapted Behavior Driven Development (BDD) to the testing activities applied to the Telescope Calibration (TELCAL) software. BDD is an agile technique that encourages communication between roles by defining test cases in natural language that specifies features and scenarios, which allows participants to share a common language and provides a high-level set of automated tests. This work describes how we implemented and maintain BDD testing for TELCAL, the infrastructure needed to support it, and proposals to expand this technique to other subsystems.
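The flavour of BDD described, natural-language scenarios backed by automated step definitions, might look as follows using Python's behave package (an assumption made for illustration; the ALMA team's actual framework, feature wording, and step names are not specified in the abstract):

```python
# Natural-language feature file (features/telcal.feature):
#
#   Feature: Telescope calibration results
#     Scenario: Atmospheric calibration produces a system temperature
#       Given a calibration scan on source "J1924-2914"
#       When the atmospheric calibration is reduced
#       Then a Tsys value between 50 and 200 K is reported
#
# Matching step definitions (features/steps/telcal_steps.py):
from behave import given, when, then

@given('a calibration scan on source "{source}"')
def step_scan(context, source):
    # Hypothetical stand-in for loading real scan data
    context.scan = {"source": source, "data": [75.0, 80.0, 78.0]}

@when("the atmospheric calibration is reduced")
def step_reduce(context):
    # Stand-in for the real TELCAL reduction
    context.tsys = sum(context.scan["data"]) / len(context.scan["data"])

@then("a Tsys value between {lo:d} and {hi:d} K is reported")
def step_check(context, lo, hi):
    assert lo <= context.tsys <= hi
```

The scenario text doubles as documentation that scientists can read and amend, while the step functions keep the check executable in the automated test suite.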
KINEROS2-AGWA: Model Use, Calibration, and Validation
NASA Technical Reports Server (NTRS)
Goodrich, D C.; Burns, I. S.; Unkrich, C. L.; Semmens, D. J.; Guertin, D. P.; Hernandez, M.; Yatheendradas, S.; Kennedy, J. R.; Levick, L. R..
2013-01-01
KINEROS (KINematic runoff and EROSion) originated in the 1960s as a distributed event-based model that conceptualizes a watershed as a cascade of overland flow model elements that flow into trapezoidal channel model elements. KINEROS was one of the first widely available watershed models that interactively coupled a finite difference approximation of the kinematic overland flow equations to a physically based infiltration model. Development and improvement of KINEROS continued from the 1960s on a variety of projects for a range of purposes, which has resulted in a suite of KINEROS-based modeling tools. This article focuses on KINEROS2 (K2), a spatially distributed, event-based watershed rainfall-runoff and erosion model, and the companion ArcGIS-based Automated Geospatial Watershed Assessment (AGWA) tool. AGWA automates the time-consuming tasks of watershed delineation into distributed model elements and initial parameterization of these elements using commonly available, national GIS data layers. A variety of approaches have been used to calibrate and validate K2 successfully across a relatively broad range of applications (e.g., urbanization, pre- and post-fire, hillslope erosion, erosion from roads, runoff and recharge, and manure transport). The case studies presented in this article (1) compare lumped to stepwise calibration and validation of runoff and sediment at plot, hillslope, and small watershed scales; and (2) demonstrate an uncalibrated application to address relative change in watershed response to wildfire.
KINEROS2/AGWA: Model use, calibration and validation
Goodrich, D.C.; Burns, I.S.; Unkrich, C.L.; Semmens, Darius J.; Guertin, D.P.; Hernandez, M.; Yatheendradas, S.; Kennedy, Jeffrey R.; Levick, Lainie R.
2012-01-01
KINEROS (KINematic runoff and EROSion) originated in the 1960s as a distributed event-based model that conceptualizes a watershed as a cascade of overland flow model elements that flow into trapezoidal channel model elements. KINEROS was one of the first widely available watershed models that interactively coupled a finite difference approximation of the kinematic overland flow equations to a physically based infiltration model. Development and improvement of KINEROS continued from the 1960s on a variety of projects for a range of purposes, which has resulted in a suite of KINEROS-based modeling tools. This article focuses on KINEROS2 (K2), a spatially distributed, event-based watershed rainfall-runoff and erosion model, and the companion ArcGIS-based Automated Geospatial Watershed Assessment (AGWA) tool. AGWA automates the time-consuming tasks of watershed delineation into distributed model elements and initial parameterization of these elements using commonly available, national GIS data layers. A variety of approaches have been used to calibrate and validate K2 successfully across a relatively broad range of applications (e.g., urbanization, pre- and post-fire, hillslope erosion, erosion from roads, runoff and recharge, and manure transport). The case studies presented in this article (1) compare lumped to stepwise calibration and validation of runoff and sediment at plot, hillslope, and small watershed scales; and (2) demonstrate an uncalibrated application to address relative change in watershed response to wildfire.
NASA Technical Reports Server (NTRS)
Czapla-Myers, J.; Thome, K.; Anderson, N.; McCorkel, J.; Leisso, N.; Good, W.; Collins, S.
2009-01-01
Ball Aerospace and Technologies Corporation in Boulder, Colorado, has developed a heliostat facility that will be used to determine the preflight radiometric calibration of Earth-observing sensors that operate in the solar-reflective regime. While automatically tracking the Sun, the heliostat directs the solar beam inside a thermal vacuum chamber, where the sensor under test resides. The main advantage of using the Sun as the illumination source for preflight radiometric calibration is that it will also be the source of illumination when the sensor is in flight. This minimizes errors in the pre- and post-launch calibration due to spectral mismatches. It also allows the instrument under test to operate at irradiance values similar to those on orbit. The Remote Sensing Group at the University of Arizona measured the transmittance of the heliostat facility using three methods, the first of which is a relative measurement made using a hyperspectral portable spectroradiometer and a well-calibrated reference panel. The second method is also a relative measurement, and uses a 12-channel automated solar radiometer. The final method is an absolute measurement using a hyperspectral spectroradiometer and reference panel combination, where the spectroradiometer is calibrated on site using a solar-radiation-based calibration.
Automated system for the calibration of magnetometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrucha, Vojtech; Kaspar, Petr; Ripka, Pavel
2009-04-01
A completely nonmagnetic calibration platform has been developed and constructed at DTU Space (Technical University of Denmark). It is intended for on-site scalar calibration of high-precision fluxgate magnetometers. An enhanced version of the same platform is being built at the Czech Technical University. There are three axes of rotation in this design (compared to two axes in the previous version). The addition of the third axis allows us to calibrate more complex devices. An electronic compass based on a vector fluxgate magnetometer and a micro electro mechanical systems (MEMS) accelerometer is one example. The new platform can also be used to evaluate the parameters of the compass in all possible variations in azimuth, pitch, and roll. The system is based on piezoelectric motors, which are placed on a platform made of aluminum, brass, plastic, and glass. Position sensing is accomplished through custom-made optical incremental sensors. The system is controlled by a microcontroller, which executes commands from a computer. The properties of the system as well as calibration and measurement results will be presented.
Concentration Independent Calibration of β-γ Coincidence Detector Using 131mXe and 133Xe
DOE Office of Scientific and Technical Information (OSTI.GOV)
McIntyre, Justin I.; Cooper, Matthew W.; Carman, April J.
Absolute efficiency calibration of radiometric detectors is frequently difficult and requires careful detector modeling and accurate knowledge of the radioactive source used. In the past we have calibrated the β-γ coincidence detector of the Automated Radioxenon Sampler/Analyzer (ARSA) using a variety of sources and techniques which have proven to be less than desirable.[1] A superior technique has been developed that uses the conversion-electron (CE) and x-ray coincidence of 131mXe to provide a more accurate absolute gamma efficiency of the detector. The 131mXe is injected directly into the beta cell of the coincident counting system, and no knowledge of absolute source strength is required. In addition, 133Xe is used to provide a second, independent means to obtain the absolute efficiency calibration. These two data points provide the necessary information for calculating the detector efficiency and can be used in conjunction with other noble gas isotopes to completely characterize and calibrate the ARSA nuclear detector. In this paper we discuss the techniques and results that we have obtained.
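The elegance of the coincidence technique is that the unknown source activity cancels: if the beta-cell singles rate is N_beta = A·eff_beta and the coincidence rate is N_coinc = A·eff_beta·eff_gamma, then eff_gamma = N_coinc / N_beta with no dependence on A. A tiny worked sketch with invented count totals:

```python
def gamma_efficiency(n_beta_singles, n_coinc):
    """Absolute gamma/x-ray channel efficiency from a beta-gamma coincidence
    measurement of decays that emit the CE and x-ray together:
        N_beta  = A * eff_beta
        N_coinc = A * eff_beta * eff_gamma
    so eff_gamma = N_coinc / N_beta, independent of the activity A."""
    return n_coinc / n_beta_singles

# Hypothetical 131mXe run: 1.2e5 beta-cell singles, 5.4e4 coincidences
print(f"gamma-channel efficiency = {gamma_efficiency(1.2e5, 5.4e4):.1%}")
```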
Wireless energizing system for an automated implantable sensor.
Swain, Biswaranjan; Nayak, Praveen P; Kar, Durga P; Bhuyan, Satyanarayan; Mishra, Laxmi P
2016-07-01
The wireless drive of an automated implantable electronic sensor has been explored for health monitoring applications. The proposed system comprises an automated biomedical sensing system that is energized through resonant inductive coupling. The implantable sensor unit is able to monitor body temperature and sends the corresponding telemetry data back wirelessly to the data recording unit. It has been observed that the wireless power delivery system is capable of energizing the automated biomedical implantable electronic sensor placed at a distance of 3 cm from the power transmitter, with an energy transfer efficiency of 26% at the operating resonant frequency of 562 kHz. The proposed method ensures real-time monitoring of human body temperature around the clock. The monitored temperature data have been compared with a calibrated temperature measurement system to ascertain the accuracy of the proposed system. The investigated technique can also be useful for monitoring other body parameters, such as blood pressure, bladder pressure, and physiological signals of the patient in vivo, using various implantable sensors.
On the Automation of the MarkIII Data Analysis System.
NASA Astrophysics Data System (ADS)
Schwegmann, W.; Schuh, H.
1999-03-01
Faster, semiautomatic data analysis is an important contribution to accelerating the VLBI procedure. A concept for the automation of one of the most widely used VLBI software packages, the MarkIII Data Analysis System, was developed. Then the program PWXCB, which extracts weather and cable calibration data from the station log-files, was automated by supplementing the existing Fortran77 program code. The new program XLOG and its results will be presented. Most tasks in VLBI data analysis are very complex, and their automation requires typical knowledge-based techniques. Thus, a knowledge-based system (KBS) for support and guidance of the analyst is being developed using the AI workbench BABYLON, which is based on methods of artificial intelligence (AI). The advantages of a KBS for the MarkIII Data Analysis System and the steps required to build a KBS will be demonstrated. Examples of the current status of the project will be given, too.
NASA Astrophysics Data System (ADS)
Li, Helen; Lee, Robben; Lee, Tyzy; Xue, Teddy; Liu, Hermes; Wu, Hall; Wan, Qijian; Du, Chunshan; Hu, Xinyi; Liu, Zhengfang
2018-03-01
As technology advances, escalating layout design complexity and chip size make defect inspection more challenging than ever before. YE (Yield Enhancement) engineers are seeking an efficient strategy that ensures accuracy without suffering long running times. A smart approach is to set different resolutions for different pattern structures; for example, logic pattern areas get a higher scan resolution while dummy areas get a lower resolution, and SRAM areas may get yet another resolution. This can significantly reduce scan processing time while accuracy does not suffer. Due to the limitations of the inspection equipment, the layout must be processed to output Care Area markers in line with the requirements of the equipment; for instance, the marker shapes must be rectangles and the number of rectangle shapes should be as small as possible. The challenge is how to select the different Care Areas by pattern structure, merge the areas efficiently, and then partition them into rectangular pieces. This paper presents a solution based on Calibre DRC and Pattern Matching. Calibre equation-based DRC is a powerful layout processing engine, and Calibre Pattern Matching's automated visual capture capability enables designers to define these geometries as layout patterns and store them in libraries which can be re-used in multiple design layouts. Pattern Matching simplifies the description of very complex relationships between pattern shapes efficiently and accurately. Pattern Matching's true power is on display when it is integrated with a normal DRC deck. In this defect inspection application, we first run Calibre DRC to get rule-based Care Areas, then use Calibre Pattern Matching's automated pattern capture capability to capture the Care Area shapes that need a higher scan resolution, with a tunable pattern halo. In the pattern matching step, when the patterns are matched, a bounding box marker is output to identify the high-resolution area. The equation-based DRC and Pattern Matching effectively work together for the different scan phases.
NASA Astrophysics Data System (ADS)
Karsten, L. R.; Gochis, D.; Dugger, A. L.; McCreight, J. L.; Barlage, M. J.; Fall, G. M.; Olheiser, C.
2017-12-01
Since version 1.0 of the National Water Model (NWM) went operational in Summer 2016, several upgrades to the model have occurred to improve hydrologic prediction for the continental United States. Version 1.1 of the NWM (Spring 2017) includes upgrades to parameter datasets impacting land surface hydrologic processes. These parameter datasets were upgraded using an automated calibration workflow that utilizes the Dynamically Dimensioned Search (DDS) algorithm to adjust parameter values against observed streamflow. These upgrades also took advantage of various observations collected for snow analysis: in-situ SNOTEL observations in the Western US, volunteer in-situ observations across the entire US, gamma-derived snow water equivalent (SWE) observations courtesy of the NWS NOAA Corps program, gridded snow depth and SWE products from the Jet Propulsion Laboratory (JPL) Airborne Snow Observatory (ASO), gridded remotely sensed satellite-based snow products (MODIS, AMSR2, VIIRS, ATMS), and gridded SWE from the NWS Snow Data Assimilation System (SNODAS). This study explores the use of these observations to quantify NWM error and the improvements from version 1.0 to version 1.1, along with subsequent work since then. In addition, this study explores the use of snow observations within the automated calibration workflow. Gridded parameter fields impacting the accumulation and ablation of snow states in the NWM were adjusted and calibrated using gridded remotely sensed snow states, SNODAS products, and in-situ snow observations. This calibration adjustment took place over various ecological regions in snow-dominated parts of the US, for a retrospective period chosen to capture a variety of climatological conditions. Specifically, the latest calibrated parameters impacting streamflow were held constant and only parameters impacting snow physics were tuned using snow observations and analyses. The adjusted parameter datasets were then used to run the model over an independent period for analysis against both snow and streamflow observations, to determine whether improvements took place. The goal of this work is to further improve snow physics in the NWM, and to identify areas where further work, such as data assimilation or further forcing improvements, will take place in the future.
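DDS itself is compact enough to sketch: perturb a randomly chosen, gradually shrinking subset of parameters and greedily keep any improvement (after Tolson and Shoemaker's published algorithm). The toy objective below stands in for the streamflow- or snow-error function, which the abstract does not specify:

```python
import numpy as np

def dds(objective, bounds, max_iter=500, r=0.2, seed=0):
    """Minimal Dynamically Dimensioned Search: greedy, global, single-point.
    r scales perturbations relative to each parameter's range."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x_best = rng.uniform(lo, hi)
    f_best = objective(x_best)
    for i in range(1, max_iter + 1):
        # Probability of perturbing each dimension shrinks as the search ages
        p = 1.0 - np.log(i) / np.log(max_iter)
        mask = rng.random(len(lo)) < p
        if not mask.any():
            mask[rng.integers(len(lo))] = True
        x = x_best.copy()
        x[mask] += r * (hi[mask] - lo[mask]) * rng.standard_normal(mask.sum())
        # Reflect at the bounds, then clip any remaining excursions
        x = np.where(x < lo, lo + (lo - x), x)
        x = np.where(x > hi, hi - (x - hi), x)
        x = np.clip(x, lo, hi)
        f = objective(x)
        if f < f_best:
            x_best, f_best = x, f
    return x_best, f_best

# Toy "calibration": minimize a quadratic error surface in 5 parameters
obj = lambda x: np.sum((x - 0.3) ** 2)
x, f = dds(obj, bounds=[(0, 1)] * 5)
print(x.round(3), f"best objective = {f:.2e}")
```

The shrinking perturbation set is what makes DDS practical for expensive hydrologic models: early iterations search globally, late iterations refine a few parameters at a time within a fixed evaluation budget.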
Cierkens, Katrijn; Plano, Salvatore; Benedetti, Lorenzo; Weijers, Stefan; de Jonge, Jarno; Nopens, Ingmar
2012-01-01
Application of activated sludge models (ASMs) to full-scale wastewater treatment plants (WWTPs) is still hampered by the difficulty of calibrating these over-parameterised models. This requires either expert knowledge or global methods that explore a large parameter space. However, a better structural balance between the submodels (ASM, hydraulic, aeration, etc.) and improved quality of influent data result in much smaller calibration efforts. In this contribution, a methodology is proposed that links data frequency and model structure to calibration quality and output uncertainty. It is composed of defining the model structure, the input data, an automated calibration, confidence interval computation and uncertainty propagation to the model output. Apart from the last step, the methodology is applied to an existing WWTP using three models differing only in the aeration submodel. A sensitivity analysis was performed on all models, allowing the most important parameters to be ranked and selected for the subsequent calibration step. The aeration submodel proved very important for obtaining good NH(4) predictions. Finally, the impact of data frequency was explored. Lowering the frequency resulted in larger deviations of parameter estimates from their default values and larger confidence intervals. Autocorrelation due to high-frequency calibration data has an opposite effect on the confidence intervals. The proposed methodology opens doors to facilitate and improve calibration efforts and to design measurement campaigns.
Data-Acquisition System With Remotely Adjustable Amplifiers
NASA Technical Reports Server (NTRS)
Nurge, Mark A.; Larson, William E.; Hallberg, Carl G.; Thayer, Steven W.; Ake, Jeffrey C.; Gleman, Stuart M.; Thompson, David L.; Medelius, Pedro J.; Crawford, Wayne A.; Vangilder, Richard M.;
1994-01-01
Improved data-acquisition system with both centralized and decentralized characteristics developed. Provides infrastructure for automation and standardization of operation, maintenance, calibration, and adjustment of many transducers. Increases efficiency by reducing need for diminishing work force of highly trained technicians to perform routine tasks. Large industrial and academic laboratory facilities benefit from systems like this one.
A Quantitative Microbial Risk Assessment (QMRA) infrastructure that automates the manual process of characterizing transport of pathogens and microorganisms, from the source of release to a point of exposure, has been developed by loosely configuring a set of modules and process-...
An automated two-dimensional optical force clamp for single molecule studies.
Lang, Matthew J; Asbury, Charles L; Shaevitz, Joshua W; Block, Steven M
2002-01-01
We constructed a next-generation optical trapping instrument to study the motility of single motor proteins, such as kinesin moving along a microtubule. The instrument can be operated as a two-dimensional force clamp, applying loads of fixed magnitude and direction to motor-coated microscopic beads moving in vitro. Flexibility and automation in experimental design are achieved by computer control of both the trap position, via acousto-optic deflectors, and the sample position, using a three-dimensional piezo stage. Each measurement is preceded by an initialization sequence, which includes adjustment of bead height relative to the coverslip using a variant of optical force microscopy (to +/-4 nm), a two-dimensional raster scan to calibrate position detector response, and adjustment of bead lateral position relative to the microtubule substrate (to +/-3 nm). During motor-driven movement, both the trap and stage are moved dynamically to apply constant force while keeping the trapped bead within the calibrated range of the detector. We present details of force clamp operation and preliminary data showing kinesin motor movement subject to diagonal and forward loads. PMID:12080136
Mastin, M.C.; Le, Thanh
2001-01-01
The U.S. Geological Survey, in cooperation with Pierce County Department of Public Works, Washington, has developed an operational tool called the Puyallup Flood-Alert System to alert users of impending floods in the Puyallup River Basin. The system acquires and incorporates meteorological and hydrological data into the Streamflow Synthesis and Reservoir Regulation (SSARR) hydrologic flow-routing model to simulate floods in the Puyallup River Basin. SSARRMENU is the user-interactive graphical interface between the user, the input and output data, and the SSARR model. In a companion cooperative project with Pierce County, the SSARR model for the Puyallup River Basin was calibrated and validated. The calibrated model is accessed through SSARRMENU, which has been specifically programmed for the Puyallup River and the needs of Pierce County. SSARRMENU automates the retrieval of data from ADAPS (Automated DAta Processing System, the U.S. Geological Survey's real-time hydrologic database), formats the data for use with SSARR, initiates SSARR model runs, displays alerts for impending floods, and provides utilities to display the simulated and observed data. An on-screen map of the basin and a series of menu items provide the user wi
Chu, Byoung-Sun; Ngo, Thao P T; Cheng, Brian B; Dain, Stephen J
2014-07-01
The accuracy and precision of any instrument should not be taken for granted. While there is an international standard for checking focimeters, there is no report of any study on their performance. A sample set of 51 focimeters (11 brands) was used to measure the spherical power of a set of lenses and the prismatic power of two lenses complying with ISO 9342-1:2005, as well as other calibrated prismatic lenses and the spherical power of some grey filters. The mean measured spherical power corresponded very closely with the calibrated values; however, the spread of results was substantial and 10 focimeters did not comply with ISO 8598:1996. The measurement of prism was much more accurate and precise, and all the focimeters complied easily. With the grey filters, about one-third of the focimeters either showed erratic readings or an error with the equivalent of category 4 sunglasses. On the other hand, nine focimeters had stable and accurate readings on a filter with a luminous transmittance of 0.5 per cent. These results confirm that, in common with all other measurement instruments, there is a need to ensure that a focimeter reads accurately and precisely over the range of refractive powers and luminous transmittances. The accurate and precise performance of an automated focimeter over its working life cannot be assumed. Checking before purchase with a set of calibrated lenses and some dark sunglass tints will indicate the suitability of a focimeter. Routine checking with the calibrated lenses will inform users whether a focimeter continues to read accurately.
APEX calibration facility: status and first commissioning results
NASA Astrophysics Data System (ADS)
Suhr, Birgit; Fries, Jochen; Gege, Peter; Schwarzer, Horst
2006-09-01
The paper presents the current status of the operational calibration facility that can be used for radiometric, spectral and geometric on-ground characterisation and calibration of imaging spectrometers. The European Space Agency (ESA) co-funded this establishment at DLR Oberpfaffenhofen within the framework of the hyperspectral imaging spectrometer Airborne Prism Experiment (APEX). It was designed to fulfil the requirements for calibration of APEX, but can also be used for other imaging spectrometers. A description of the hardware set-up of the optical bench is given. Signals from two sides can alternatively be sent to the hyperspectral sensor under investigation. From one side, the spatial calibration is done using an off-axis collimator and six slits of different widths and orientations to measure the line spread function (LSF) along the flight direction as well as across the flight direction. From the other side, the spectral calibration is performed. A monochromator provides radiation in a range from 380 nm to 13 μm with a bandwidth between 0.1 nm in the visible and 5 nm in the thermal infrared. For the relative radiometric calibration, a large integrating sphere of 1.65 m diameter with an exit port of 55 cm × 40 cm is used. The absolute radiometric calibration is done using a small integrating sphere of 50 cm diameter that is regularly calibrated according to national standards. This paper describes the hardware components and their accuracy, and presents the software interface for automation of the measurements.
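A typical reduction on the spectral side, stepping the monochromator across a channel and fitting a Gaussian to recover the channel's centre wavelength and bandwidth, might look like the sketch below (all wavelengths, widths and noise levels are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma, c):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + c

# Hypothetical detector response to monochromator steps around 550 nm;
# the fitted mu is the channel centre, and FWHM = 2.355 * sigma is the
# spectral response width.
rng = np.random.default_rng(5)
wl = np.arange(540.0, 560.0, 0.5)            # monochromator steps (nm)
signal = gaussian(wl, 1.0, 550.3, 1.8, 0.02) + rng.normal(0, 0.01, wl.size)

popt, _ = curve_fit(gaussian, wl, signal, p0=[1, 550, 2, 0])
print(f"centre = {popt[1]:.2f} nm, FWHM = {2.355 * abs(popt[2]):.2f} nm")
```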
Computer vision applications for coronagraphic optical alignment and image processing.
Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A
2013-05-10
Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fowler, E. E.; Sellers, T. A.; Lu, B.
Purpose: The Breast Imaging Reporting and Data System (BI-RADS) breast composition descriptors are used for standardized mammographic reporting and are assessed visually. This reporting is clinically relevant because breast composition can impact mammographic sensitivity and is a breast cancer risk factor. New techniques are presented and evaluated for generating automated BI-RADS breast composition descriptors using both raw and calibrated full field digital mammography (FFDM) image data. Methods: A matched case-control dataset with FFDM images was used to develop three automated measures for the BI-RADS breast composition descriptors. Histograms of each calibrated mammogram in the percent glandular (pg) representation were processed to create the new BR_pg measure. Two previously validated measures of breast density derived from calibrated and raw mammograms were converted to the new BR_vc and BR_vr measures, respectively. These three measures were compared with the radiologist-reported BI-RADS composition assessments from the patient records. The authors used two optimization strategies with differential evolution to create these measures: method-1 used breast cancer status, and method-2 matched the reported BI-RADS descriptors. Weighted kappa (κ) analysis was used to assess the agreement between the new measures and the reported measures. Each measure's association with breast cancer was evaluated with odds ratios (ORs) adjusted for body mass index, breast area, and menopausal status. ORs were estimated as per unit increase with 95% confidence intervals. Results: The three BI-RADS measures generated by method-1 had κ between 0.25-0.34. These measures were significantly associated with breast cancer status in the adjusted models: (a) OR = 1.87 (1.34, 2.59) for BR_pg; (b) OR = 1.93 (1.36, 2.74) for BR_vc; and (c) OR = 1.37 (1.05, 1.80) for BR_vr. The measures generated by method-2 had κ between 0.42-0.45. Two of these measures were significantly associated with breast cancer status in the adjusted models: (a) OR = 1.95 (1.24, 3.09) for BR_pg; (b) OR = 1.42 (0.87, 2.32) for BR_vc; and (c) OR = 2.13 (1.22, 3.72) for BR_vr. The radiologist-reported measures from the patient records showed a similar association, OR = 1.49 (0.99, 2.24), although only borderline statistically significant. Conclusions: A general framework was developed and validated for converting calibrated mammograms and continuous measures of breast density to fully automated approximations for the BI-RADS breast composition descriptors. The techniques are general and suitable for a broad range of clinical and research applications.
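A sketch of the method-2 idea: use differential evolution to choose cut-points on a continuous density measure that maximize weighted-kappa agreement with reported BI-RADS categories. The data and cut-points below are synthetic, and the real measures (BR_pg and so on) are more involved than a simple thresholding of one scalar.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.metrics import cohen_kappa_score

# Synthetic stand-ins: a continuous percent-density measure per mammogram,
# and noisy "radiologist-reported" BI-RADS categories (0-3).
rng = np.random.default_rng(6)
density = rng.uniform(0, 100, 200)
reported = np.digitize(density + rng.normal(0, 8, 200), [20, 45, 70])

def neg_kappa(cuts):
    """Negative quadratic weighted kappa of the categories induced by cuts."""
    predicted = np.digitize(density, np.sort(cuts))
    if len(np.unique(predicted)) < 2:
        return 1.0  # degenerate cut-points: worst possible score
    return -cohen_kappa_score(reported, predicted, weights="quadratic")

res = differential_evolution(neg_kappa, bounds=[(0, 100)] * 3, seed=1)
print("cut points:", np.sort(res.x).round(1),
      " kappa:", round(-res.fun, 3))
```

Differential evolution suits this objective because the kappa surface is piecewise constant in the cut-points, so gradient-based optimizers have nothing to follow.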
NASA Astrophysics Data System (ADS)
Robson, E. I.; Stevens, J. A.; Jenness, T.
2001-11-01
Calibrated data for 65 flat-spectrum extragalactic radio sources are presented at a wavelength of 850 μm, covering a three-year period from 1997 April. The data, obtained from the James Clerk Maxwell Telescope using the SCUBA camera in pointing mode, were analysed using an automated pipeline process based on the Observatory Reduction and Acquisition Control-Data Reduction (orac-dr) system. This paper describes the techniques used to analyse and calibrate the data, and presents the data base of results along with a representative sample of the better-sampled light curves.
ISS Payload Racks Automated Flow Control Calibration Method
NASA Technical Reports Server (NTRS)
Simmonds, Boris G.
2003-01-01
Payload Racks utilize MTL and/or LTL station water for cooling of payloads and avionics. Flow control ranges from fully closed valves up to 300 lbm/hr. Instrument accuracies are as high as ±7.5 lbm/hr for the flow sensors and ±3 lbm/hr for the valve controller, for a total system accuracy of ±10.5 lbm/hr. An improved methodology was developed, tested and proven that reduces the error of the commanded flows to less than ±1 lbm/hr. The methodology could be packaged in a "calibration kit" for on-orbit flow sensor checkout and recalibration, extending rack operations before return to Earth.
NASA Astrophysics Data System (ADS)
Augustine, John A.; Cornwall, Christopher R.; Hodges, Gary B.; Long, Charles N.; Medina, Carlos I.; Deluisi, John J.
2003-02-01
Over the past decade, networks of Multifilter Rotating Shadowband Radiometers (MFRSR) and automated sun photometers have been established in the United States to monitor aerosol properties. The MFRSR alternately measures diffuse and global irradiance in six narrow spectral bands and a broadband channel of the solar spectrum, from which the direct normal component for each may be inferred. Its 500-nm channel mimics sun photometer measurements and thus is a source of aerosol optical depth information. Automatic data reduction methods are needed because of the high volume of data produced by the MFRSR. In addition, these instruments are often not calibrated for absolute irradiance and must be periodically calibrated for optical depth analysis using the Langley method. This process involves extrapolation to the signal the MFRSR would measure at the top of the atmosphere (I0). Here, an automated clear-sky identification algorithm is used to screen MFRSR 500-nm measurements for suitable calibration data. The clear-sky MFRSR measurements are subsequently used to construct a set of calibration Langley plots from which a mean I0 is computed. This calibration I0 may be subsequently applied to any MFRSR 500-nm measurement within the calibration period to retrieve aerosol optical depth. This method is tested on a 2-month MFRSR dataset from the Table Mountain NOAA Surface Radiation Budget Network (SURFRAD) station near Boulder, Colorado. The resultant I0 is applied to two Asian dust-related high air pollution episodes that occurred within the calibration period on 13 and 17 April 2001. Computed aerosol optical depths for 17 April range from approximately 0.30 to 0.40, and those for 13 April vary from background levels to >0.30. Errors in these retrievals were estimated to range from ±0.01 to ±0.05, depending on the solar zenith angle. The calculations are compared with independent MFRSR-based aerosol optical depth retrievals at the Pawnee National Grasslands, 85 km to the northeast of Table Mountain, and to sun-photometer-derived aerosol optical depths at the National Renewable Energy Laboratory in Golden, Colorado, 50 km to the south. Both the Table Mountain and Golden stations are situated within a few kilometers of the Front Range of the Rocky Mountains, whereas the Pawnee station is on the eastern plains of Colorado. Time series of aerosol optical depth from Pawnee and Table Mountain stations compare well for 13 April when, according to the Naval Aerosol Analysis and Prediction System, an upper-level Asian dust plume enveloped most of Colorado. Aerosol optical depths at the Golden station for that event are generally greater than those at Table Mountain and Pawnee, possibly because of the proximity of Golden to Denver's urban aerosol plume. The dust over Colorado was primarily surface based on 17 April. On that day, aerosol optical depths at Table Mountain and Golden are similar but are 2 times the magnitude of those at Pawnee. This difference is attributed to meteorological conditions that favored air stagnation in the planetary boundary layer along the Front Range, and a west-to-east gradient in aerosol concentration. The magnitude and timing of the aerosol optical depth measurements at Table Mountain for these events are found to be consistent with independent measurements made at NASA Aerosol Robotic Network (AERONET) stations at Missoula, Montana, and at Bondville, Illinois.
Automated data acquisition technology development: Automated modeling and control development
NASA Technical Reports Server (NTRS)
Romine, Peter L.
1995-01-01
This report documents the completion of, and improvements made to, the software developed for automated data acquisition and automated modeling and control development on the Texas Micro rack-mounted PCs. This research was initiated because the Metal Processing Branch of NASA Marshall Space Flight Center identified a need for a mobile data acquisition and data analysis system, customized for welding measurement and calibration. Several hardware configurations were evaluated and a PC-based system was chosen. The Welding Measurement System (WMS) is a dedicated instrument strictly for data acquisition and data analysis. In addition to the data acquisition functions described in this report, WMS also supports many functions associated with process control. The hardware and software requirements for an automated acquisition system for welding process parameters, welding equipment checkout, and welding process modeling were determined in 1992. From these recommendations, NASA purchased the necessary hardware and software. The new welding acquisition system is designed to collect welding parameter data and perform analysis to determine the voltage-versus-current arc-length relationship for VPPA welding. Once the results of this analysis are obtained, they can be used to develop a RAIL function to control welding startup and shutdown without torch crashing.
THE RABIT: A RAPID AUTOMATED BIODOSIMETRY TOOL FOR RADIOLOGICAL TRIAGE
Garty, Guy; Chen, Youhua; Salerno, Alessio; Turner, Helen; Zhang, Jian; Lyulko, Oleksandra; Bertucci, Antonella; Xu, Yanping; Wang, Hongliang; Simaan, Nabil; Randers-Pehrson, Gerhard; Yao, Y. Lawrence; Amundson, Sally A.; Brenner, David J.
2010-01-01
In response to the recognized need for high-throughput biodosimetry methods for use after large-scale radiological events, a logical approach is complete automation of standard biodosimetric assays that are currently performed manually. We describe progress to date on the RABIT (Rapid Automated BIodosimetry Tool), designed to score micronuclei or γ-H2AX fluorescence in lymphocytes derived from a single drop of blood from a fingerstick. The RABIT system is designed to be completely automated, from the input of the capillary blood sample into the machine to the output of a dose estimate. Improvements in throughput are achieved through use of a single drop of blood, optimization of the biological protocols for in-situ analysis in multi-well plates, implementation of robotic plate and liquid handling, and new developments in high-speed imaging. Automating well-established bioassays represents a promising approach to high-throughput radiation biodosimetry, both because high throughputs can be achieved and because the time to deployment is potentially much shorter than for a new biological assay. Here we describe the development of each of the individual modules of the RABIT system and show preliminary data from key modules. System integration is ongoing, to be followed by calibration and validation. PMID:20065685
John R. Butnor; Kurt H. Johnsen
2004-01-01
Measurement of soil respiration to quantify ecosystem carbon cycling requires absolute, not relative, estimates of soil CO2 efflux. We describe a novel, automated efflux apparatus that can be used to test the accuracy of chamber-based soil respiration measurements by generating known CO2 fluxes. Artificial soil is supported...
How Important Is Content in the Ratings of Essay Assessments?
ERIC Educational Resources Information Center
Shermis, Mark D.; Shneyderman, Aleksandr; Attali, Yigal
2008-01-01
This study was designed to examine the extent to which "content" accounts for variance in scores assigned in automated essay scoring protocols. Specifically, it was hypothesised that certain writing genres would emphasise content more than others. Data were drawn from 1668 essays calibrated at two grade levels (6 and 8) using "e-rater[TM]", an…
Tamarisk Mapping and Monitoring Using High Resolution Satellite Imagery
Jason W. San Souci; John T. Doyle
2006-01-01
QuickBird high resolution multispectral satellite imagery (60 cm GSD, 4 spectral bands) and calibrated products from DigitalGlobe's AgroWatch program were used as inputs to Visual Learning System's Feature Analyst automated feature extraction software to map localized occurrences of pervasive and aggressive Tamarisk (Tamarix ramosissima), an invasive...
Device and methods for "gold standard" registration of clinical 3D and 2D cerebral angiograms
NASA Astrophysics Data System (ADS)
Madan, Hennadii; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga
2015-03-01
Translation of novel and existing 3D-2D image registration methods into clinical image-guidance systems is limited by the lack of objective validation on clinical image datasets. The main reason is that, besides the calibration of the 2D imaging system, a reference or "gold standard" registration is very difficult to obtain on clinical image datasets. In the context of cerebral endovascular image-guided interventions (EIGIs), we present a calibration device in the form of a headband with integrated fiducial markers and propose an automated pipeline comprising 3D and 2D image processing, analysis, and annotation steps, the result of which is a retrospective calibration of the 2D imaging system and an optimal, i.e., "gold standard", registration of 3D and 2D images. The device and methods were used to create the "gold standard" on 15 datasets of 3D and 2D cerebral angiograms, each acquired on a patient undergoing EIGI for either aneurysm coiling or embolization of an arteriovenous malformation. The device integrated seamlessly into the clinical workflow of EIGI, while the automated pipeline eliminated all manual input and interactive image processing, analysis, and annotation. In this way, the time to obtain the "gold standard" was reduced from 30 minutes to less than one minute, and the "gold standard" 3D-2D registration was obtained on all 15 datasets of cerebral angiograms with sub-0.1 mm accuracy.
Automated Calibration For Numerical Models Of Riverflow
NASA Astrophysics Data System (ADS)
Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey
2017-04-01
Calibration of numerical models has been fundamental to all types of hydro-system modeling since its beginnings, as a means of approximating the parameters that mimic overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity. The uncertainty of the most suitable methods is also analyzed. These optimization methods minimize an objective function that compares synthetic measurements with simulated data. Synthetic measurement data replace the observed data set to guarantee that a parameter solution exists. The input data for the objective function derive from a hydro-morphological dynamics numerical model representing a 180-degree bend channel. The hydro-morphological numerical model exhibits a high level of ill-posedness in the mathematical problem. Minimization of the objective function by the candidate optimization methods indicates failure of some of the gradient-based methods, such as Newton Conjugate Gradient and BFGS. Others, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient, show partial convergence. Still others, such as Levenberg-Marquardt and LeastSquareRoot, yield parameter solutions that range outside the physical limits. Moreover, there is a significant computational demand for genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as for brute-force methods. The deterministic Sequential Least Squares Programming and the stochastic Bayesian inference methods give the best optimization results. Keywords: automated calibration of hydro-morphological dynamic numerical models, Bayesian inference theory, deterministic optimization methods.
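The abstract names several optimizers available in the scipy.optimize module; the sketch below is an illustrative comparison on a toy calibration problem (a stand-in model with a known parameter optimum, not the authors' hydro-morphological model):

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def model(params, x):
    k1, k2 = params                      # stand-ins for calibration parameters
    return k1 * np.sin(x) + k2 * x**2

x = np.linspace(0.0, 2.0, 50)
observed = model([0.7, 1.3], x)          # synthetic "measurements" -> known optimum

def objective(params):
    return np.sum((model(params, x) - observed) ** 2)

x0 = [0.1, 0.1]
for method in ("Nelder-Mead", "BFGS", "L-BFGS-B"):
    res = minimize(objective, x0, method=method)
    print(f"{method:12s} x={res.x} success={res.success}")

# Stochastic global search: costlier, but more robust to ill-posedness.
res = differential_evolution(objective, bounds=[(0, 5), (0, 5)], seed=1)
print("DiffEvolution x=", res.x)
```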
Calibration development strategies for the Daniel K. Inouye Solar Telescope (DKIST) data center
NASA Astrophysics Data System (ADS)
Watson, Fraser T.; Berukoff, Steven J.; Hays, Tony; Reardon, Kevin; Speiss, Daniel J.; Wiant, Scott
2016-07-01
The Daniel K. Inouye Solar Telescope (DKIST), currently under construction on Haleakalā, Maui, Hawai'i, will be the largest solar telescope in the world and will use adaptive optics to provide the highest-resolution view of the Sun to date. It is expected that DKIST data will enable significant and transformative discoveries that will dramatically increase our understanding of the Sun and its effects on the Sun-Earth environment. As a result, it is a priority of the DKIST Data Center team at the National Solar Observatory (NSO) to deliver timely and accurately calibrated data to the astronomical community for further analysis. This requires a process which allows the Data Center to develop calibration pipelines for all of the facility instruments, taking advantage of similarities between them, as well as similarities to current-generation instruments. There are also challenges, addressed in this article, such as the large volume of data expected and the importance of supporting both manual and automated calibrations. This paper details the current calibration development strategies being used by the Data Center team at the National Solar Observatory to manage this calibration effort, so as to ensure routine delivery of high-quality scientific data to users.
Automated tracking of lava lake level using thermal images at Kīlauea Volcano, Hawai’i
Patrick, Matthew R.; Swanson, Don; Orr, Tim R.
2016-01-01
Tracking the level of the lava lake in Halema‘uma‘u Crater, at the summit of Kīlauea Volcano, Hawai’i, is an essential part of monitoring the ongoing eruption and forecasting potentially hazardous changes in activity. We describe a simple automated image processing routine that analyzes continuously-acquired thermal images of the lava lake and measures lava level. The method uses three image segmentation approaches, based on edge detection, short-term change analysis, and composite temperature thresholding, to identify and track the lake margin in the images. These relative measurements from the images are periodically calibrated with laser rangefinder measurements to produce real-time estimates of lake elevation. Continuous, automated tracking of the lava level has been an important tool used by the U.S. Geological Survey’s Hawaiian Volcano Observatory since 2012 in real-time operational monitoring of the volcano and its hazard potential.
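A minimal sketch of the thresholding idea under simplifying assumptions (the HVO routine combines edge detection, short-term change analysis, and composite temperature thresholding; only the last is sketched, with hypothetical rangefinder calibration pairs):

```python
import numpy as np

def lake_margin_row(thermal_image, temp_threshold=200.0):
    """Topmost image row containing lake (hot) pixels, or NaN if none."""
    hot = thermal_image >= temp_threshold           # boolean lake mask
    rows = np.where(hot.any(axis=1))[0]
    return rows.min() if rows.size else np.nan

def rows_to_elevation_fit(rows, rangefinder_elev_m):
    """Linear calibration mapping pixel row to lake elevation (m)."""
    return np.polyfit(rows, rangefinder_elev_m, 1)  # gain, offset

# Hypothetical calibration pairs (pixel row, laser-rangefinder elevation):
gain, offset = rows_to_elevation_fit([120, 150, 180], [830.0, 824.0, 818.0])

img = np.zeros((240, 320))
img[135:, :] = 250.0                                # toy frame: lake below row 135
print(gain * lake_margin_row(img) + offset)         # ~827 m
```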
Phonotactic Diversity Predicts the Time Depth of the World’s Language Families
Rama, Taraka
2013-01-01
The ASJP (Automated Similarity Judgment Program) project described an automated, lexical similarity-based method for dating the world's language groups, using 52 archaeological, epigraphic and historical calibration date points. The present paper describes a new automated dating method based on phonotactic diversity. Unlike ASJP, our method does not require any information on the internal classification of a language group. Also, the method can use all the available word lists for a language and its dialects, eschewing the debate on 'language' vs. 'dialect'. We further combine these dates and provide a new baseline which, to our knowledge, is the best one. We make a systematic comparison of our method, ASJP's dating procedure, and the combined dates. We predict time depths for the world's language families and sub-families using this new baseline. Finally, we explain our results in terms of the model of language change given by Nettle. PMID:23691003
Hardware fault insertion and instrumentation system: Mechanization and validation
NASA Technical Reports Server (NTRS)
Benson, J. W.
1987-01-01
Automated test capability for extensive low-level hardware fault insertion testing is developed. The test capability is used to calibrate fault detection coverage and associated latency times as relevant to projecting overall system reliability. Described are modifications made to the NASA Ames Reconfigurable Flight Control System (RDFCS) Facility to fully automate the total test loop involving the Draper Laboratories' Fault Injector Unit. The automated capability provided included the application of sequences of simulated low-level hardware faults, the precise measurement of fault latency times, the identification of fault symptoms, and bulk storage of test case results. A PDP-11/60 served as a test coordinator, and a PDP-11/04 as an instrumentation device. The fault injector was controlled by applications test software in the PDP-11/60, rather than by manual commands from a terminal keyboard. The time base was especially developed for this application to use a variety of signal sources in the system simulator.
Optimizing Decision Preparedness by Adapting Scenario Complexity and Automating Scenario Generation
NASA Technical Reports Server (NTRS)
Dunne, Rob; Schatz, Sae; Flore, Stephen M.; Nicholson, Denise
2011-01-01
Klein's recognition-primed decision (RPD) framework proposes that experts make decisions by recognizing similarities between current decision situations and previous decision experiences. Unfortunately, military personnel are often presented with situations that they have not experienced before. Scenario-based training (SBT) can help mitigate this gap. However, SBT remains a challenging and inefficient training approach. To address these limitations, the authors present an innovative formulation of scenario complexity that contributes to the larger research goal of developing an automated scenario generation system. This system will enable trainees to effectively advance through a variety of increasingly complex decision situations and experiences. By adapting scenario complexities and automating generation, trainees will be provided with a greater variety of appropriately calibrated training events, thus broadening their repositories of experience. Preliminary results from empirical testing (N=24) of the proof-of-concept formula are presented, and future avenues of scenario complexity research are also discussed.
Stepwise Regression Analysis of MDOE Balance Calibration Data Acquired at DNW
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Philipsen, Iwan
2007-01-01
This paper reports a comparison of two experiment design methods applied in the calibration of a strain-gage balance. One features a 734-point test matrix in which loads are varied systematically according to a method commonly applied in aerospace research and known in the literature of experiment design as One Factor At a Time (OFAT) testing. Two variations of an alternative experiment design were also executed on the same balance, each with different features of an MDOE experiment design. The Modern Design of Experiments (MDOE) is an integrated process of experiment design, execution, and analysis applied at NASA's Langley Research Center to achieve significant reductions in cycle time, direct operating cost, and experimental uncertainty in aerospace research generally and in balance calibration experiments specifically. Personnel in the Instrumentation and Controls Department of the German Dutch Wind Tunnels (DNW) applied MDOE methods in the calibration of a balance on an automated calibration machine in order to evaluate them. The data were sent to Langley Research Center for analysis and comparison. This paper reports key findings from this analysis. The chief result is that a 100-point calibration exploiting MDOE principles delivered quality comparable to a 700+ point OFAT calibration, with significantly reduced cycle time and attendant savings in direct and indirect costs. While the DNW test matrices implemented key MDOE principles and produced excellent results, additional MDOE concepts implemented in balance calibrations at Langley Research Center are also identified and described.
Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud
Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H.; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan
2014-01-01
Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC). PMID:25076221
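A hedged sketch of the core geometric step such a method rests on: estimating a camera's pose from correspondences between 3D map points (from the point cloud) and their 2D image projections, here via OpenCV's PnP solver. All point values and the intrinsic matrix below are hypothetical placeholders, not the Barcelona Robot Lab calibration:

```python
import numpy as np
import cv2

object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0],
                          [0, 1, 0], [0.5, 0.5, 1]], dtype=np.float64)
image_points = np.array([[320, 240], [420, 238], [424, 340],
                         [318, 342], [372, 290]], dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0],     # assumed focal length / principal point
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                     # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)             # rotation matrix of the camera pose
print(ok, R, tvec)
```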
NASA Technical Reports Server (NTRS)
Kruse, Fred A.; Dwyer, John L.
1993-01-01
The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) measures reflected light in 224 contiguous spectral bands in the 0.4 to 2.45 micron region of the electromagnetic spectrum. Numerous studies have used these data for mineralogic identification and mapping based on the presence of diagnostic spectral features. Quantitative mapping requires conversion of the AVIRIS data to physical units (usually reflectance) so that analysis results can be compared and validated with field and laboratory measurements. This study evaluated two different techniques for calibrating AVIRIS data to ground reflectance, an empirically-based method and an atmospheric-model-based method, to determine their effects on quantitative scientific analyses. Expert system analysis and linear spectral unmixing were applied to both calibrated data sets to determine the effect of the calibration on the mineral identification and quantitative mapping results. Comparison of the image-map results and image reflectance spectra indicates that the model-based calibrated data can be used with automated mapping techniques to produce accurate maps showing the spatial distribution and abundance of surface mineralogy. This has positive implications for future operational mapping using AVIRIS or similar imaging spectrometer data sets without requiring a priori knowledge.
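A small sketch of the empirically-based calibration idea (often called the empirical line method), assuming hypothetical bright/dark ground-target values; the study's actual implementation is not specified in the abstract:

```python
import numpy as np

# At-sensor radiance and field-measured reflectance for two ground targets:
radiance_bright, reflectance_bright = 95.0, 0.55
radiance_dark, reflectance_dark = 12.0, 0.04

# Per-band linear mapping: reflectance = gain * radiance + offset
gain = (reflectance_bright - reflectance_dark) / (radiance_bright - radiance_dark)
offset = reflectance_dark - gain * radiance_dark

def to_reflectance(radiance):
    return gain * radiance + offset    # applied per band across the image

print(to_reflectance(50.0))
```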
Towards a robust green astro-comb for Earth-like exoplanet searches
NASA Astrophysics Data System (ADS)
Ravi, Aakash; Martin, Leopoldo; Phillips, David; Langellier, Nicholas; Milbourne, Timothy; Dolliff, Christian; Walsworth, Ronald
2017-04-01
The detection of exoplanets using the radial velocity (RV) method has become a very exciting and active area of research. Detecting Earth-like planets, however, is still very challenging as it requires extremely precise calibration of the spectrographs used in such measurements. To address this challenge, we employ a visible wavelength frequency comb - referenced to the global positioning system - as a calibration source. Our comb calibrator is realized by spectrally broadening and shifting the output of a 1 GHz repetition rate mode-locked Ti:sapphire laser using a photonic crystal fiber and then filtering the comb lines to create a 16 GHz-spacing comb. This system has been implemented at the TNG telescope on La Palma to calibrate the HARPS-N spectrograph. However, the complexity of the system has thus far prevented its routine use as it requires frequency comb specialists to be on site during measurements. Here, we propose some automation strategies and present preliminary results from our efforts. We also discuss ongoing comb-calibrated astrophysical observations, including measurements of the Sun. The solar measurements are part of an effort to understand stellar noise sources in the RV data and demonstrate the sensitivity of the instrument to detect terrestrial exoplanets.
Herzog, D.C.
1990-01-01
A comparison is made of geomagnetic calibration data obtained from a high-sensitivity proton magnetometer enclosed within an orthogonal bias coil system, with data obtained from standard procedures at a mid-latitude U.S. Geological Survey magnetic observatory using a quartz horizontal magnetometer, a Ruska magnetometer, and a total field magnetometer. The orthogonal coil arrangement is used with the proton magnetometer to provide Deflected-Inclination-Deflected-Declination (DIDD) data from which quasi-absolute values of declination, horizontal intensity, and vertical intensity can be derived. Vector magnetometers provide the ordinate values to yield baseline calibrations for both the DIDD and standard observatory processes. Results obtained from a prototype system over a period of several months indicate that the DIDD unit can furnish adequate absolute field values for maintaining observatory calibration data, thus providing baseline control for unattended, remote stations. © 1990.
Hydrogen calibration of GD-spectrometer using Zr-1Nb alloy
NASA Astrophysics Data System (ADS)
Mikhaylov, Andrey A.; Priamushko, Tatiana S.; Babikhina, Maria N.; Kudiiarov, Victor N.; Heller, Rene; Laptev, Roman S.; Lider, Andrey M.
2018-02-01
To study the hydrogen distribution in Zr-1Nb alloy (Э110 alloy), GD-OES was applied in this work. Quantitative analysis requires standard samples containing hydrogen; however, standard samples with high concentrations of hydrogen in the zirconium alloy that meet the shape and size requirements are not available. In this work, a method for producing Zr + H calibration samples was developed for the first time. An automated Gas Reaction Controller complex was used to hydrogenate the samples. Diffusion equations were used to calculate the parameters of post-hydrogenation incubation of the samples in an inert gas atmosphere. Absolute hydrogen concentrations in the samples were determined by melting in an inert gas atmosphere using a RHEN602 analyzer (LECO Company). Hydrogen distribution was studied using nuclear reaction analysis (HZDR, Dresden, Germany). RF GD-OES was used for calibration. The depth of the craters was measured with a Hommel-Etamic profilometer (Jenoptik, Germany).
Gamma/Hadron Separation for the HAWC Observatory
NASA Astrophysics Data System (ADS)
Gerhardt, Michael J.
The High-Altitude Water Cherenkov (HAWC) Observatory is a gamma-ray observatory sensitive to gamma rays from 100 GeV to 100 TeV with an instantaneous field of view of ~2 sr. It is located on the Sierra Negra plateau in Mexico at an elevation of 4,100 m and began full operation in March 2015. The purpose of the detector is to study relativistic particles that are produced by interstellar and intergalactic objects such as pulsars, supernova remnants, molecular clouds, black holes and more. To achieve optimal angular resolution, energy reconstruction and cosmic ray background suppression for the extensive air showers detected by HAWC, good timing and charge calibration are crucial, as is optimization of quality cuts on background suppression variables. Additions to the HAWC timing calibration, in particular automating the calibration quality checks, and a new method for background suppression using a multivariate analysis are presented in this thesis.
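A hedged sketch of a multivariate gamma/hadron separation in this spirit, using a boosted-tree classifier on toy shower variables; the feature names and synthetic data are placeholders, not HAWC's actual background-suppression variables:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Toy features, e.g. a shower "compactness" and a charge-dispersion measure:
gamma = rng.normal([1.5, 0.5], 0.3, size=(n, 2))
hadron = rng.normal([1.0, 1.0], 0.3, size=(n, 2))
X = np.vstack([gamma, hadron])
y = np.r_[np.ones(n), np.zeros(n)]          # 1 = gamma, 0 = hadron

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```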
Development of gait segmentation methods for wearable foot pressure sensors.
Crea, S; De Rossi, S M M; Donati, M; Reberšek, P; Novak, D; Vitiello, N; Lenzi, T; Podobnik, J; Munih, M; Carrozza, M C
2012-01-01
We present an automated segmentation method based on the analysis of plantar pressure signals recorded from two synchronized wireless foot insoles. Given the strict limits on computational power and power consumption typical of wearable electronic components, our aim is to investigate the capability of a Hidden Markov Model machine-learning method to detect gait phases using different levels of complexity in the processing of the wearable pressure sensor signals. Three different datasets are therefore developed: raw voltage values, calibrated sensor signals, and a calibrated estimation of total ground reaction force and position of the plantar center of pressure. The method is tested on a pool of 5 healthy subjects through a leave-one-out cross-validation. The results show high classification performance achieved using estimated biomechanical variables, averaging 96%. Calibrated signals and raw voltage values show higher delays and dispersions in phase transition detection, suggesting lower reliability for online applications.
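A minimal sketch of the Hidden Markov Model approach, assuming the hmmlearn package and toy two-channel signals standing in for the calibrated insole variables (ground reaction force and center-of-pressure position):

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
t = np.arange(2000)
# Toy periodic traces mimicking total ground reaction force and CoP position:
grf = np.abs(np.sin(2 * np.pi * t / 100)) + 0.05 * rng.standard_normal(t.size)
cop = np.cos(2 * np.pi * t / 100) + 0.05 * rng.standard_normal(t.size)
X = np.column_stack([grf, cop])

model = hmm.GaussianHMM(n_components=4, covariance_type="full", n_iter=50)
model.fit(X)                        # unsupervised fit of 4 gait-phase states
phases = model.predict(X)           # one phase label per time sample
print(phases[:20])
```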
High-throughput accurate-wavelength lens-based visible spectrometer.
Bell, Ronald E; Scotti, Filippo
2010-10-01
A scanning visible spectrometer has been prototyped to complement fixed-wavelength transmission grating spectrometers for charge exchange recombination spectroscopy. Fast f/1.8 200 mm commercial lenses are used with a large 2160 mm⁻¹ grating for high throughput. A stepping-motor controlled sine drive positions the grating, which is mounted on a precision rotary table. A high-resolution optical encoder on the grating stage allows the grating angle to be measured with an absolute accuracy of 0.075 arc sec, corresponding to a wavelength error ≤0.005 Å. At this precision, changes in grating groove density due to thermal expansion and variations in the refractive index of air are important. An automated calibration procedure determines all the relevant spectrometer parameters to high accuracy. Changes in bulk grating temperature, atmospheric temperature, and pressure are monitored between the time of calibration and the time of measurement to ensure a persistent wavelength calibration.
High accuracy wavelength calibration for a scanning visible spectrometer.
Scotti, Filippo; Bell, Ronald E
2010-10-01
Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ∼0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (∼0.005 Å) is possible, allowing absolute velocity measurements within ∼0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
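A sketch of the wavelength computation such a calibration rests on: the grating equation for a constant-included-angle sine-drive spectrometer, with small corrections for thermal expansion of the groove spacing and the refractive index of air. All constants below are illustrative, not the instrument's fitted parameters:

```python
import numpy as np

def wavelength_angstrom(theta_deg, groove_per_mm=2160.0, order=1,
                        half_angle_deg=11.0, temp_c=20.0, ref_temp_c=20.0,
                        alpha_per_k=8e-6, n_air=1.000277):
    """m*lambda = 2*d*cos(phi)*sin(theta) for a constant included angle 2*phi."""
    # Groove spacing, corrected for bulk thermal expansion of the grating:
    d_mm = (1.0 / groove_per_mm) * (1.0 + alpha_per_k * (temp_c - ref_temp_c))
    lam_air_mm = 2.0 * d_mm * np.cos(np.radians(half_angle_deg)) \
                 * np.sin(np.radians(theta_deg)) / order
    return lam_air_mm * 1e7 * n_air    # air wavelength -> vacuum, in Angstrom

print(wavelength_angstrom(45.0))       # ~6430 A for these toy constants
```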
JWST Associations overview: automated generation of combined products
NASA Astrophysics Data System (ADS)
Alexov, Anastasia; Swade, Daryl; Bushouse, Howard; Diaz, Rosa; Eisenhamer, Jonathan; Hack, Warren; Kyprianou, Mark; Levay, Karen; Rahmani, Christopher; Swam, Mike; Valenti, Jeff
2018-01-01
We present the design of the James Webb Space Telescope (JWST) Data Management System (DMS) automated processing of Associations. An Association captures the relationship between exposures and higher-level data products, such as combined mosaics created from dithered and tiled observations. The astronomer's intent is captured within the Proposal Planning System (PPS) and provided to DMS as candidate associations. These candidates are converted into Association Pools and Association Generator Tables that serve as input to the automated processing which creates the combined data products. Association Pools are generated to capture a list of exposures that could potentially form associations and provide relevant information about those exposures. The Association Generator uses grouping definitions to create one or more Association Tables from a single input Association Pool. Each Association Table defines a set of exposures to be combined and the ruleset of the combination to be performed; the calibration software creates Associated data products based on these input tables. The initial design produces automated Associations within a proposal. Additionally, the overall JWST design is conducive to eventually producing Associations for observations from multiple proposals, similar to the Hubble Legacy Archive (HLA).
Wireless energizing system for an automated implantable sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swain, Biswaranjan; Nayak, Praveen P.; Kar, Durga P.
The wireless drive of an automated implantable electronic sensor has been explored for health monitoring applications. The proposed system comprises an automated biomedical sensing system which is energized through resonant inductive coupling. The implantable sensor unit is able to monitor body temperature and wirelessly sends the corresponding telemetry data back to the data recording unit. It has been observed that the wireless power delivery system is capable of energizing the automated biomedical implantable electronic sensor placed at a distance of 3 cm from the power transmitter, with an energy transfer efficiency of 26% at the operating resonant frequency of 562 kHz. This proposed method ensures real-time monitoring of different human body temperatures around the clock. The monitored temperature data have been compared with a calibrated temperature measurement system to ascertain the accuracy of the proposed system. The investigated technique can also be useful for monitoring other body parameters such as blood pressure, bladder pressure, and physiological signals of the patient in vivo using various implantable sensors.
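A back-of-the-envelope sketch of the resonant-link quantities mentioned above: the LC resonant frequency and a textbook two-coil link-efficiency estimate. The component values are assumptions chosen to land near the reported 562 kHz, not the authors' design values:

```python
import numpy as np

L = 100e-6          # coil inductance (H), assumed
C = 800e-12         # tuning capacitance (F), assumed
f0 = 1.0 / (2.0 * np.pi * np.sqrt(L * C))
print(f"resonant frequency: {f0 / 1e3:.0f} kHz")   # ~563 kHz

# Maximum link efficiency for coupling k and coil quality factors Q1, Q2
# (standard figure of merit for resonant inductive power transfer):
k, Q1, Q2 = 0.05, 100.0, 100.0
x = k**2 * Q1 * Q2
eta = x / (1.0 + np.sqrt(1.0 + x))**2
print(f"link efficiency upper bound: {eta:.0%}")
```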
Development and implementation of an automated quantitative film digitizer quality control program
NASA Astrophysics Data System (ADS)
Fetterly, Kenneth A.; Avula, Ramesh T. V.; Hangiandreou, Nicholas J.
1999-05-01
A semi-automated, quantitative film digitizer quality control program that is based on the computer analysis of the image data from a single digitized test film was developed. This program includes measurements of the geometric accuracy, optical density performance, signal to noise ratio, and presampled modulation transfer function. The variability of the measurements was less than ±5%. Measurements were made on a group of two clinical and two laboratory laser film digitizers during a trial period of approximately four months. Quality control limits were established based on clinical necessity, vendor specifications and digitizer performance. During the trial period, one of the digitizers failed the performance requirements and was corrected by calibration.
Khalil, M. A.K. [Oregon Graduate Institute of Science and Technology; Rasmussen, R. A. [Oregon Graduate Institute of Science and Technology
1994-01-01
This data base presents continuous automated atmospheric methane (CH4) measurements taken at the atmospheric monitoring facility in Cape Meares, Oregon, by the Oregon Graduate Institute of Science and Technology. The Cape Meares data represent some 119,000 individual atmospheric methane measurements carried out during 1979-1992. Analysis of ambient air (collected 12 to 72 times daily) was carried out by means of an automated sampling and measurement system, using the method of gas chromatography and flame ionization detection. Despite the long course of the record and the large number of individual measurements, these data may all be linked to a single absolute calibration standard.
NASA Astrophysics Data System (ADS)
Wi, S.; Ray, P. A.; Brown, C.
2015-12-01
A software package developed to facilitate building distributed hydrologic models in a modular modeling system is presented. The software package provides a user-friendly graphical user interface that eases its practical use in water resources-related research and practice. The modular modeling system organizes the options available to users when assembling models according to the stages of hydrological cycle, such as potential evapotranspiration, soil moisture accounting, and snow/glacier melting processes. The software is intended to be a comprehensive tool that simplifies the task of developing, calibrating, validating, and using hydrologic models through the inclusion of intelligent automation to minimize user effort, and reduce opportunities for error. Processes so far automated include the definition of system boundaries (i.e., watershed delineation), climate and geographical input generation, and parameter calibration. Built-in post-processing toolkits greatly improve the functionality of the software as a decision support tool for water resources system management and planning. Example post-processing toolkits enable streamflow simulation at ungauged sites with predefined model parameters, and perform climate change risk assessment by means of the decision scaling approach. The software is validated through application to watersheds representing a variety of hydrologic regimes.
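The abstract does not state the package's calibration objective; a common choice in hydrologic model calibration is the Nash-Sutcliffe efficiency (1 = perfect agreement), sketched here as the kind of score an automated calibrator would maximize:

```python
import numpy as np

def nash_sutcliffe(simulated, observed):
    """NSE = 1 - SSE / variance-of-observations (in sum-of-squares form)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

print(nash_sutcliffe([2.1, 3.0, 4.2], [2.0, 3.1, 4.0]))  # close to 1
```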
Automated test-site radiometer for vicarious calibration
NASA Astrophysics Data System (ADS)
Li, Xin; Yin, Ya-peng; Liu, En-chao; Zhang, Yan-na; Xun, Li-na; Wei, Wei; Zhang, Zhi-peng; Qiu, Gang-gang; Zhang, Quan; Zheng, Xiao-bing
2014-11-01
In order to realize unmanned vicarious calibration, the Automated Test-site Radiometer (ATR) was developed for surface reflectance measurements. ATR samples the spectrum from 400 nm to 1600 nm with 8 interference filters coupled with silicon and InGaAs detectors. The field of view of each channel is 10°, with parallel optical axes. One SWIR channel lies in the center and the other seven VNIR channels sit on a circle of 4.8 cm diameter, which ensures that each channel views nearly the same section of ground. The optical head as a whole is temperature-controlled using a TE cooler for greater stability and lower noise. ATR is powered by a solar panel and transmits its data through a BDS (China's BeiDou Navigation Satellite System) terminal, enabling long-term measurements without personnel on site. ATR was deployed at the Dunhuang test site, viewing a ground field of about 30 cm diameter, for multi-spectral reflectance measurements. Other instruments at the site include a Cimel sunphotometer and a diffuser-to-globe irradiance meter for atmosphere observations. The methodology for band-averaged reflectance retrieval and the hyperspectral reflectance fitting process are described. The hyperspectral reflectance and atmospheric parameters are then put into the 6S code to predict TOA radiance, which is compared with MODIS radiance.
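A minimal sketch of the band-averaged reflectance retrieval mentioned above: weight a surface reflectance spectrum by each channel's spectral response and normalize. The triangular response and toy spectrum are placeholders for ATR's measured response functions:

```python
import numpy as np

wl = np.linspace(400.0, 1600.0, 1201)            # wavelength grid (nm)
rho = 0.1 + 0.2 * (wl - 400.0) / 1200.0          # toy surface reflectance spectrum

def band_average(center_nm, fwhm_nm=20.0):
    """Response-weighted mean reflectance for one filter channel."""
    resp = np.clip(1.0 - np.abs(wl - center_nm) / fwhm_nm, 0.0, None)
    return np.sum(rho * resp) / np.sum(resp)

for center in (500.0, 870.0, 1600.0):
    print(center, round(band_average(center), 4))
```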
Toward designing for trust in database automation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duez, P. P.; Jamieson, G. A.
Appropriate reliance on system automation is imperative for safe and productive work, especially in safety-critical systems. It is unsafe to rely on automation beyond its designed use; conversely, it can be both unproductive and unsafe to manually perform tasks that are better relegated to automated tools. Operator trust in automated tools mediates reliance, and trust appears to affect how operators use technology. As automated agents become more complex, the question of trust in automation is increasingly important. In order to achieve proper use of automation, we must engender an appropriate degree of trust that is sensitive to changes in operating functions and context. In this paper, we present research concerning trust in automation in the domain of automated tools for relational databases. Lee and See have provided models of trust in automation. One model developed by Lee and See identifies three key categories of information about the automation that lie along a continuum of attributional abstraction. Purpose-, process-, and performance-related information serve, both individually and through inferences between them, to describe automation in such a way as to engender properly-calibrated trust. Thus, one can look at information from different levels of attributional abstraction as a general requirements analysis for information key to appropriate trust in automation. The model of information necessary to engender appropriate trust in automation [1] is a general one. Although it describes categories of information, it does not provide insight on how to determine the specific information elements required for a given automated tool. We have applied the Abstraction Hierarchy (AH) to this problem in the domain of relational databases. The AH serves as a formal description of the automation at several levels of abstraction, ranging from a very abstract purpose-oriented description to a more concrete description of the resources involved in the automated process. The connection between an AH for an automated tool and a list of information elements at the three levels of attributional abstraction is then direct, providing a method for satisfying information requirements for appropriate trust in automation. In this paper, we present our method for developing specific information requirements for an automated tool, based on a formal analysis of that tool and the models presented by Lee and See. We show an example of the application of the AH to automation, in the domain of relational database automation, and the resulting set of specific information elements for appropriate trust in the automated tool. Finally, we comment on the applicability of this approach to the domain of nuclear plant instrumentation. (authors)
Monitoring forest land from high altitude and from space
NASA Technical Reports Server (NTRS)
1971-01-01
Forest inventory, forest stress, and standardization and calibration studies are presented. These include microscale photointerpretation of forest and nonforest land classes, multiseasonal film densities for automated forest and nonforest land classification, trend and spread of bark beetle infestations from 1968 through 1971, aerial photography for determining optimum levels of stand density to reduce such infestations, use of airborne spectrometers and multispectral scanners for previsual detection of Ponderosa pine trees under stress from insects and diseases, establishment of an earth resources technology satellite test site in the Black Hills and the identification of natural resolution targets, detection of root disease impact on forest stands by sequential orbital and suborbital multispectral photography, and calibration of color aerial photography.
Method for in-situ restoration of platinum resistance thermometer calibration
Carroll, Radford M.
1989-01-01
A method is provided for in-situ restoration of platinum resistance thermometers (PRT's) that have undergone surface oxide contamination and/or strain-related damage causing decalibration. The method, which may be automated using a programmed computer control arrangement, consists of applying a dc heating current to the resistive sensing element of the PRT of sufficient magnitude to heat the element to an annealing temperature and maintaining the temperature for a specified period to restore the element to a stress-free calibration condition. The process anneals the sensing element of the PRT without subjecting the entire PRT assembly to the annealing temperature and may be used in the periodic maintenance of installed PRT's.
Method for in-situ restoration of platinum resistance thermometer calibration
Carroll, R.M.
1987-10-23
A method is provided for in-situ restoration of platinum resistance thermometers (PRT's) that have undergone surface oxide contamination and/or strain-related damage causing decalibration. The method, which may be automated using a programmed computer control arrangement, consists of applying a dc heating current to the resistive sensing element of the PRT of sufficient magnitude to heat the element to an annealing temperature and maintaining the temperature for a specified period to restore the element to a stress-free calibration condition. The process anneals the sensing element of the PRT without subjecting the entire PRT assembly to the annealing temperature and may be used in the periodic maintenance of installed PRT's. 1 fig.
Geometric Calibration and Validation of Ultracam Aerial Sensors
NASA Astrophysics Data System (ADS)
Gruber, Michael; Schachinger, Bernhard; Muick, Marc; Neuner, Christian; Tschemmernegg, Helfried
2016-03-01
We present details of the calibration and validation procedure for UltraCam Aerial Camera systems. Results from the laboratory calibration and from validation flights are presented for both the large-format nadir cameras and the oblique cameras. Thus, in this contribution we show results from the UltraCam Eagle and the UltraCam Falcon, both nadir mapping cameras, and the UltraCam Osprey, our oblique camera system. This sensor offers a mapping-grade nadir component together with four oblique camera heads. The geometric processing after the flight mission is covered by the UltraMap software product, and we present details about that workflow as well. The first part consists of the initial post-processing, which combines image information with camera parameters derived from the laboratory calibration. The second part, the traditional automated aerial triangulation (AAT), is the step from single images to blocks and enables an additional optimization process. We also present some special features of our software, which are designed to better support the operator in analyzing large blocks of aerial images and judging the quality of the photogrammetric set-up.
The HST/WFC3 Quicklook Project: A User Interface to Hubble Space Telescope Wide Field Camera 3 Data
NASA Astrophysics Data System (ADS)
Bourque, Matthew; Bajaj, Varun; Bowers, Ariel; Dulude, Michael; Durbin, Meredith; Gosmeyer, Catherine; Gunning, Heather; Khandrika, Harish; Martlin, Catherine; Sunnquist, Ben; Viana, Alex
2017-06-01
The Hubble Space Telescope's Wide Field Camera 3 (WFC3) instrument, comprised of two detectors, UVIS (Ultraviolet-Visible) and IR (Infrared), has been acquiring ~ 50-100 images daily since its installation in 2009. The WFC3 Quicklook project provides a means for instrument analysts to store, calibrate, monitor, and interact with these data through the various Quicklook systems: (1) a ~ 175 TB filesystem, which stores the entire WFC3 archive on disk, (2) a MySQL database, which stores image header data, (3) a Python-based automation platform, which currently executes 22 unique calibration/monitoring scripts, (4) a Python-based code library, which provides system functionality such as logging, downloading tools, database connection objects, and filesystem management, and (5) a Python/Flask-based web interface to the Quicklook system. The Quicklook project has enabled large-scale WFC3 analyses and calibrations, such as the monitoring of the health and stability of the WFC3 instrument, the measurement of ~ 20 million WFC3/UVIS Point Spread Functions (PSFs), the creation of WFC3/IR persistence calibration products, and many others.
Schulze, H Georg; Turner, Robin F B
2015-06-01
High-throughput information extraction from large numbers of Raman spectra is becoming an increasingly taxing problem due to the proliferation of new applications enabled using advances in instrumentation. Fortunately, in many of these applications, the entire process can be automated, yielding reproducibly good results with significant time and cost savings. Information extraction consists of two stages, preprocessing and analysis. We focus here on the preprocessing stage, which typically involves several steps, such as calibration, background subtraction, baseline flattening, artifact removal, smoothing, and so on, before the resulting spectra can be further analyzed. Because the results of some of these steps can affect the performance of subsequent ones, attention must be given to the sequencing of steps, the compatibility of these sequences, and the propensity of each step to generate spectral distortions. We outline here important considerations to effect full automation of Raman spectral preprocessing: what is considered full automation; putative general principles to effect full automation; the proper sequencing of processing and analysis steps; conflicts and circularities arising from sequencing; and the need for, and approaches to, preprocessing quality control. These considerations are discussed and illustrated with biological and biomedical examples reflecting both successful and faulty preprocessing.
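An illustrative sketch, not the authors' pipeline, of a fixed preprocessing sequence in which each step is kept as a separate function so the ordering and its potential to introduce distortions can be audited:

```python
import numpy as np
from scipy.signal import medfilt, savgol_filter

def despike(y, kernel=5, z=6.0):
    """Replace cosmic-ray spikes with the local median."""
    med = medfilt(y, kernel)
    resid = y - med
    return np.where(np.abs(resid) > z * np.std(resid), med, y)

def flatten_baseline(y, x, order=3):
    """Crude polynomial baseline subtraction (order is an assumption)."""
    return y - np.polyval(np.polyfit(x, y, order), x)

def preprocess(x, y):
    y = despike(y)                                       # artifact removal first
    y = flatten_baseline(y, x)                           # then baseline flattening
    y = savgol_filter(y, window_length=9, polyorder=3)   # smoothing after baseline
    return y / np.max(np.abs(y))                         # normalization last

x = np.linspace(400, 1800, 1401)                 # Raman shift (cm^-1)
y = np.exp(-(x - 1000) ** 2 / 50.0) + 0.001 * x  # toy peak on a sloped baseline
print(preprocess(x, y)[:5])
```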
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Marinova, Mariela; Artusi, Carlo; Brugnolo, Laura; Antonelli, Giorgia; Zaninotto, Martina; Plebani, Mario
2013-11-01
Although, due to its high specificity and sensitivity, LC-MS/MS is an efficient technique for the routine determination of immunosuppressants in whole blood, it involves time-consuming manual sample preparation. The aim of the present study was therefore to develop an automated sample-preparation protocol for the quantification of sirolimus, everolimus and tacrolimus by LC-MS/MS using a liquid handling platform. Six-level commercially available blood calibrators were used for assay development, while four quality control materials and three blood samples from patients under immunosuppressant treatment were employed for the evaluation of imprecision. Barcode reading, sample re-suspension, transfer of whole blood samples into 96-well plates, addition of internal standard solution, mixing, and protein precipitation were performed with a liquid handling platform. After plate filtration, the deproteinised supernatants were submitted to on-line SPE. The only manual steps in the entire process were de-capping of the tubes and transfer of the well plates to the HPLC autosampler. Calibration curves were linear throughout the selected ranges. The imprecision and accuracy data for all analytes were highly satisfactory. The agreement between the results obtained with manual and those obtained with automated sample preparation was optimal (n=390, r=0.96). In daily routine (100 patient samples), the typical overall turnaround time was less than 6 h. Our findings indicate that the proposed analytical system is suitable for routine analysis, since it is straightforward and precise. Furthermore, it incurs less manual workload and less risk of error in the quantification of whole blood immunosuppressant concentrations than conventional methods. © 2013.
Root zone water quality model (RZWQM2): Model use, calibration and validation
Ma, Liwang; Ahuja, Lajpat; Nolan, B.T.; Malone, Robert; Trout, Thomas; Qi, Z.
2012-01-01
The Root Zone Water Quality Model (RZWQM2) has been used widely for simulating agricultural management effects on crop production and soil and water quality. Although it is a one-dimensional model, it has many desirable features for the modeling community. This article outlines the principles of calibrating the model component by component with one or more datasets and validating the model with independent datasets. Users should consult the RZWQM2 user manual distributed along with the model and a more detailed protocol on how to calibrate RZWQM2 provided in a book chapter. Two case studies (or examples) are included in this article. One is from an irrigated maize study in Colorado to illustrate the use of field and laboratory measured soil hydraulic properties on simulated soil water and crop production. It also demonstrates the interaction between soil and plant parameters in simulated plant responses to water stresses. The other is from a maize-soybean rotation study in Iowa to show a manual calibration of the model for crop yield, soil water, and N leaching in tile-drained soils. Although the commonly used trial-and-error calibration method works well for experienced users, as shown in the second example, an automated calibration procedure is more objective, as shown in the first example. Furthermore, the incorporation of the Parameter Estimation Software (PEST) into RZWQM2 made the calibration of the model more efficient than a grid (ordered) search of model parameters. In addition, PEST provides sensitivity and uncertainty analyses that should help users in selecting the right parameters to calibrate.
Data processing pipeline for Herschel HIFI
NASA Astrophysics Data System (ADS)
Shipman, R. F.; Beaulieu, S. F.; Teyssier, D.; Morris, P.; Rengel, M.; McCoey, C.; Edwards, K.; Kester, D.; Lorenzani, A.; Coeur-Joly, O.; Melchior, M.; Xie, J.; Sanchez, E.; Zaal, P.; Avruch, I.; Borys, C.; Braine, J.; Comito, C.; Delforge, B.; Herpin, F.; Hoac, A.; Kwon, W.; Lord, S. D.; Marston, A.; Mueller, M.; Olberg, M.; Ossenkopf, V.; Puga, E.; Akyilmaz-Yabaci, M.
2017-12-01
Context. The HIFI instrument on the Herschel Space Observatory performed over 9100 astronomical observations, almost 900 of which were calibration observations in the course of the nearly four-year Herschel mission. The data from each observation had to be converted from raw telemetry into calibrated products and were included in the Herschel Science Archive. Aims: The HIFI pipeline was designed to provide robust conversion from raw telemetry into calibrated data throughout all phases of the HIFI mission. Pre-launch laboratory testing was supported, as were routine mission operations. Methods: A modular software design allowed components to be easily added, removed, amended and/or extended as the understanding of the HIFI data developed during and after mission operations. Results: The HIFI pipeline processed data from all HIFI observing modes within the Herschel automated processing environment as well as within an interactive environment. The same software can be used by the general astronomical community to reprocess any standard HIFI observation. The pipeline also recorded the consistency of processing results and provided automated quality reports. Many pipeline modules had been in use since the HIFI pre-launch instrument-level testing. Conclusions: Processing in steps facilitated data analysis to discover and address instrument artefacts and uncertainties. The availability of the same pipeline components from pre-launch throughout the mission made for well-understood, tested, and stable processing. A smooth transition from one phase to the next significantly enhanced processing reliability and robustness. Herschel was an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
Sarkar, Sandip; Burriesci, Gaetano; Wojcik, Adam; Aresti, Nicholas; Hamilton, George; Seifalian, Alexander M
2009-04-16
Long-term patency of expanded polytetrafluoroethylene (ePTFE) small calibre cardiovascular bypass prostheses (<6 mm) is poor because of thrombosis and intimal hyperplasia due to low compliance, stimulating the search for elastic alternatives. Wall porosity allows effective post-implantation graft healing, encouraging endothelialisation and a measured fibrovascular response. We have developed a novel poly(carbonate)urethane-based nanocomposite polymer incorporating polyhedral oligomeric silsesquioxane (POSS) nanocages (UCL-NANO) which shows anti-thrombogenicity and biostability. We report an extrusion-phase-inversion technique for manufacturing uniform-walled porous conduits using UCL-NANO. Image analysis-aided wall measurement showed that two uniform wall-thicknesses could be specified. Different coagulant conditions revealed the importance of low-temperature phase-inversion for graft integrity. Although minor reduction of pore-size variation resulted from the addition of ethanol or N,N-dimethylacetamide, high concentrations of ethanol as coagulant did not provide uniform porosity throughout the wall. Tensile testing showed the grafts to be elastic, with strength directly proportional to weight. The ultimate strengths achieved were above those expected from haemodynamic conditions, with anisotropy due to the manufacturing process. Elemental analysis by energy-dispersive X-ray analysis did not show a regional variation of POSS on the lumen or outer surface. In conclusion, the automated vertical extrusion-phase-inversion device can reproducibly fabricate uniform-walled small calibre conduits from UCL-NANO. These elastic microporous grafts demonstrate favourable mechanical integrity for haemodynamic exposure and are currently undergoing in vivo evaluation of durability and healing properties.
Apparatus Tests Thermocouples For Seebeck Inhomogeneity
NASA Technical Reports Server (NTRS)
Burkett, Cecil G., Jr.; Bauserman, Willard A., Jr.; West, James W.
1995-01-01
Automated apparatus reveals sources of error not revealed in calibration. Computer-controlled apparatus detects and measures Seebeck inhomogeneities in sheathed thermocouples. Measures thermocouple output voltage as function of position of probe along sharp gradient of temperature. Abnormal variations in voltage-versus-position data indicative of Seebeck inhomogeneities. Prototype for development of standard method and equipment for routine acceptance/rejection testing of sheathed thermocouples in industrial and research laboratories.
NASA Astrophysics Data System (ADS)
Peterson, Karl
Since the discovery in the late 1930s that air entrainment can improve the durability of concrete, it has been important for people to know the quantity, spatial distribution, and size distribution of the air-voids in their concrete mixes in order to ensure a durable final product. The task of air-void system characterization has fallen on the microscopist, who, according to a standard test method laid forth by the American Society for Testing and Materials, must meticulously count or measure about a thousand air-voids per sample as exposed on a cut and polished cross-section of concrete. The equipment used to perform this task has traditionally included a stereomicroscope, a mechanical stage, and a tally counter. Over the past 30 years, with the availability of computers and digital imaging, automated methods have been introduced to perform the same task, but using the same basic equipment. The method described here replaces the microscope and mechanical stage with an ordinary flatbed desktop scanner, and replaces the microscopist and tally counter with a personal computer; two pieces of equipment much more readily available than a microscope with a mechanical stage, and certainly easier to find than a person willing to sit for extended periods of time counting air-voids. Most laboratories that perform air-void system characterization typically have cabinets full of prepared samples with corresponding results from manual operators. Proponents of automated methods often take advantage of this fact by analyzing the same samples and comparing the results. A similar iterative approach is described here where scanned images collected from a significant number of samples are analyzed, the results compared to those of the manual operator, and the settings optimized to best approximate the results of the manual operator. The results of this calibration procedure are compared to an alternative calibration procedure based on the more rigorous digital image accuracy assessment methods employed primarily by the remote sensing/satellite imaging community.
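The calibration-against-operator step described above can be pictured as a simple one-dimensional optimization. The sketch below, with invented scan data and a hypothetical air_content measure, tunes a single grayscale threshold so that automated air contents best match a manual operator's values; the published procedure optimizes more settings than this.

```python
# Sketch of calibrating an automated method against a manual operator:
# choose the grayscale threshold that minimizes the squared mismatch
# between automated and manual air contents. All data are invented.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
scans = [rng.normal(140, 30, (500, 500)) for _ in range(6)]  # fake scans
manual_air_pct = np.array([4.1, 5.3, 3.2, 6.0, 4.8, 5.5])

def air_content(scan, threshold):
    # Pixels above threshold counted as air-voids, as percent of area.
    return 100.0 * np.mean(scan > threshold)

def mismatch(threshold):
    auto = np.array([air_content(s, threshold) for s in scans])
    return np.sum((auto - manual_air_pct) ** 2)

best = minimize_scalar(mismatch, bounds=(100, 250), method="bounded")
print("optimized threshold:", best.x)
```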
Calibration of the motor-assisted robotic stereotaxy system: MARS.
Heinig, Maximilian; Hofmann, Ulrich G; Schlaefer, Alexander
2012-11-01
The motor-assisted robotic stereotaxy system presents a compact and lightweight robotic system for stereotactic neurosurgery. Our system is designed to position probes in the human brain for various applications, for example, deep brain stimulation. It features five fully automated axes. High positioning accuracy is of utmost importance in robotic neurosurgery. First, the key parameters of the robot's kinematics are determined using an optical tracking system. Next, the positioning errors at the center of the arc, which is equivalent to the target position in stereotactic interventions, are investigated using a set of perpendicular cameras. A modeless robot calibration method is introduced and evaluated. To conclude, the application accuracy of the robot is studied in a phantom trial. We identified the bending of the arc under load as the robot's main error source. A calibration algorithm was implemented to compensate for the deflection of the robot's arc. The mean error after the calibration was 0.26 mm, the 68.27th percentile was 0.32 mm, and the 95.45th was 0.50 mm. The kinematic properties of the robot were measured, and based on the results an appropriate calibration method was derived. With mean errors smaller than currently used mechanical systems, our results show that the robot's accuracy is appropriate for stereotactic interventions.
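A minimal sketch of the deflection-compensation idea: fit the measured sag of the arc as a low-order polynomial of probe angle and add the predicted sag back onto commanded positions. All numbers below are invented, and the actual calibration is likely richer (load-dependent as well as angle-dependent).

```python
# Sketch of a model-free deflection compensation: fit measured arc sag
# versus probe angle with a quadratic and correct commanded positions.
# Angles and sag values are invented placeholders.
import numpy as np

arc_angle_deg = np.array([-60, -40, -20, 0, 20, 40, 60], dtype=float)
measured_sag_mm = np.array([0.55, 0.42, 0.31, 0.27, 0.30, 0.44, 0.57])

coeffs = np.polyfit(arc_angle_deg, measured_sag_mm, deg=2)

def compensated_target(z_command_mm, angle_deg):
    """Raise the commanded depth by the predicted sag at this angle."""
    return z_command_mm + np.polyval(coeffs, angle_deg)

print(compensated_target(100.0, 30.0))
```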
Development and operation of a high-throughput accurate-wavelength lens-based spectrometer
Bell, Ronald E.
2014-07-11
A high-throughput spectrometer for the 400-820 nm wavelength range has been developed for charge exchange recombination spectroscopy or general spectroscopy. A large 2160 mm⁻¹ grating is matched with fast f/1.8 200 mm lenses, which provide stigmatic imaging. A precision optical encoder measures the grating angle with an accuracy ≤ 0.075 arc seconds. A high quantum efficiency low-etaloning CCD detector allows operation at longer wavelengths. A patch panel allows input fibers to interface with interchangeable fiber holders that attach to a kinematic mount behind the entrance slit. The computer-controlled hardware allows automated control of wavelength, timing, and f-number, as well as automated data collection and wavelength calibration.
Total ozone observation by sun photometry at Arosa, Switzerland
NASA Astrophysics Data System (ADS)
Staehelin, Johannes; Schill, Herbert; Hoegger, Bruno; Viatte, Pierre; Levrat, Gilbert; Gamma, Adrian
1995-07-01
The method used for ground-based total ozone observations and the design of two instruments used to monitor atmospheric total ozone at Arosa (Dobson spectrophotometer and Brewer spectrometer) are briefly described. Two different procedures for the calibration of the Dobson spectrometer, both based on the Langley plot method, are presented. Data quality problems that occurred in recent years in the measurements of one Dobson instrument at Arosa are discussed, and two different methods to reassess total ozone observations are compared. Two partially automated Dobson spectrophotometers and two completely automated Brewer spectrometers are currently in operation at Arosa. Careful comparison of the results of the measurements of the different instruments yields valuable information on possible small long-term drifts of the instruments involved in the operational measurements.
On-orbit characterization of hyperspectral imagers
NASA Astrophysics Data System (ADS)
McCorkel, Joel
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne- and satellite-based sensors. Ground-truth measurements at these test sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal in the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This dissertation presents a method for determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on a multispectral sensor, the Moderate-resolution Imaging Spectroradiometer (MODIS), as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. A method to predict hyperspectral surface reflectance using a combination of MODIS data and spectral shape information is developed and applied for the characterization of Hyperion. Spectral shape information is based on RSG's historical in situ data for the Railroad Valley test site and spectral library data for the Libyan test site. Average atmospheric parameters, also based on historical measurements, are used in reflectance prediction and transfer to space. Results of several cross-calibration scenarios that differ in image acquisition coincidence, test site, and reference sensor are found for the characterization of Hyperion. These are compared with results from the reflectance-based approach of vicarious calibration, a well-documented method developed by the RSG that serves as a baseline of calibration performance for the cross-calibration method developed here. Cross-calibration provides results that are within 2% of the reflectance-based results in most spectral regions. Larger disagreements exist at the shorter wavelengths studied in this work, as well as in spectral regions that experience absorption by the atmosphere.
Somnam, Sarawut; Jakmunee, Jaroon; Grudpan, Kate; Lenghor, Narong; Motomizu, Shoji
2008-12-01
An automated hydrodynamic sequential injection (HSI) system with spectrophotometric detection was developed. Thanks to the hydrodynamic injection principle, simple devices can be used for introducing reproducible microliter volumes of both sample and reagent into the flow channel to form stacked zones in a similar fashion to those in a sequential injection system. The zones were then pushed to the detector and a peak profile was recorded. The determination of nitrite and nitrate in water samples by employing the Griess reaction was chosen as a model. Calibration graphs with linearity in the range of 0.7-40 μM were obtained for both nitrite and nitrate. Detection limits were found to be 0.3 μM NO(2)(-) and 0.4 μM NO(3)(-), respectively, with a sample throughput of 20 h(-1) for consecutive determination of both species. The developed system was successfully applied to the analysis of water samples, employing simple and cost-effective instrumentation and offering a high degree of automation and low chemical consumption.
Garteiser, Philippe; Doblas, Sabrina; Towner, Rheal A; Griffin, Timothy M
2013-11-01
To use an automated water-suppressed magnetic resonance imaging (MRI) method to objectively assess adipose tissue (AT) volumes in whole body and specific regional body components (subcutaneous, thoracic and peritoneal) of obese and lean mice. Water-suppressed MR images were obtained on a 7T, horizontal-bore MRI system in whole bodies (excluding head) of 26-week-old male C57BL6J mice fed a control (10% kcal fat) or high-fat diet (60% kcal fat) for 20 weeks. Manual (outlined regions) versus automated (Gaussian fitting applied to threshold-weighted images) segmentation procedures were compared for whole body AT and regional AT volumes (i.e., subcutaneous, thoracic, and peritoneal). The AT automated segmentation method was compared to dual-energy X-ray (DXA) analysis. The average AT volumes for whole body and individual compartments correlated well between the manual outlining and the automated methods (R2>0.77, p<0.05). Subcutaneous, peritoneal, and total body AT volumes were increased 2-3 fold and thoracic AT volume increased more than 5-fold in diet-induced obese mice versus controls (p<0.05). MRI and DXA-based method comparisons were highly correlative (R2=0.94, p<0.0001). Automated AT segmentation of water-suppressed MRI data using a global Gaussian filtering algorithm resulted in a fairly accurate assessment of total and regional AT volumes in a pre-clinical mouse model of obesity.
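The global Gaussian-fitting segmentation can be sketched as follows: fit a Gaussian to the bright (fat) mode of the intensity histogram and threshold a fixed number of standard deviations below its mean. The synthetic image, mode locations, and voxel size below are placeholders, not the study's acquisition parameters.

```python
# Sketch of automated AT segmentation via global Gaussian fitting:
# fit a Gaussian to the bright (fat) mode of the intensity histogram,
# threshold at mean - 2*sigma, and convert voxel counts to volume.
# The image array and voxel dimensions are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
image = np.concatenate([rng.normal(100, 15, 50_000),   # background/lean
                        rng.normal(600, 40, 10_000)])  # fat signal
image = image.reshape(60, 100, 10)

counts, edges = np.histogram(image, bins=256)
centers = 0.5 * (edges[:-1] + edges[1:])

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

hi = centers > 300                         # fit only the fat mode
popt, _ = curve_fit(gaussian, centers[hi], counts[hi],
                    p0=[counts[hi].max(), 600.0, 50.0])
threshold = popt[1] - 2.0 * popt[2]

voxel_volume_mm3 = 0.1 * 0.1 * 0.5         # hypothetical voxel size
fat_voxels = np.count_nonzero(image > threshold)
print("AT volume (mm^3):", fat_voxels * voxel_volume_mm3)
```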
Maximizing the Science Output of GOES-R SUVI during Operations
NASA Astrophysics Data System (ADS)
Shaw, M.; Vasudevan, G.; Mathur, D. P.; Mansir, D.; Shing, L.; Edwards, C. G.; Seaton, D. B.; Darnel, J.; Nwachuku, C.
2017-12-01
Regular manual calibrations are an often-unavoidable demand on ground operations personnel during long-term missions. This paper describes a set of features built into the instrument control software and the techniques employed by the Solar Ultraviolet Imager (SUVI) team to automate a large fraction of regular on-board calibration activities, allowing SUVI to be operated with little manual commanding from the ground and little interruption to nominal sequencing. SUVI is a Generalized Cassegrain telescope with a large field of view that images the Sun in six extreme ultraviolet (EUV) narrow bandpasses centered at 9.4, 13.1, 17.1, 19.5, 28.4 and 30.4 nm. It is part of the payload of the Geostationary Operational Environmental Satellite (GOES-R) mission.
Vanishing Point Extraction and Refinement for Robust Camera Calibration
Tsai, Fuan
2017-01-01
This paper describes a flexible camera calibration method using refined vanishing points without prior information. Vanishing points are estimated from human-made features like parallel lines and repeated patterns. With the vanishing points extracted from the three mutually orthogonal directions, the interior and exterior orientation parameters can be further calculated using collinearity condition equations. A vanishing point refinement process is proposed to reduce the uncertainty caused by vanishing point localization errors. The fine-tuning algorithm is based on the divergence of grouped feature points projected onto the reference plane, minimizing the standard deviation of each of the grouped collinear points with an O(1) computational complexity. This paper also presents an automated vanishing point estimation approach based on the cascade Hough transform. The experiment results indicate that the vanishing point refinement process can significantly improve camera calibration parameters and the root mean square error (RMSE) of the constructed 3D model can be reduced by about 30%.
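The core geometric step, estimating a vanishing point from a pencil of image lines, reduces to a homogeneous least-squares problem. The sketch below solves it by SVD for invented line segments; it does not reproduce the paper's cascade Hough transform or its projected-point refinement.

```python
# Sketch of vanishing point estimation as the least-squares intersection
# of image line segments. Each line is the cross product of its endpoint
# homogeneous coordinates; the vanishing point v minimizes sum (l . v)^2.
# Segment endpoints (x1, y1, x2, y2) are invented placeholders.
import numpy as np

segments = np.array([
    [100.0, 400.0, 300.0, 310.0],
    [120.0, 500.0, 330.0, 380.0],
    [ 90.0, 300.0, 310.0, 250.0],
])

p1 = np.column_stack([segments[:, 0], segments[:, 1], np.ones(len(segments))])
p2 = np.column_stack([segments[:, 2], segments[:, 3], np.ones(len(segments))])
lines = np.cross(p1, p2)
lines /= np.linalg.norm(lines[:, :2], axis=1, keepdims=True)  # unit normals

# The smallest right singular vector of the stacked lines is the
# homogeneous point closest (in algebraic distance) to all of them.
_, _, vt = np.linalg.svd(lines)
v = vt[-1]
print("vanishing point:", v[:2] / v[2])
```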
Control Program for an Optical-Calibration Robot
NASA Technical Reports Server (NTRS)
Johnston, Albert
2005-01-01
A computer program provides semiautomatic control of a movable robot used to perform optical calibration of video-camera-based optoelectronic sensor systems that will be used to guide automated rendezvous maneuvers of spacecraft. The function of the robot is to move a target and hold it at specified positions. With the help of limit switches, the software first centers or finds the target. Then the target is moved to a starting position. Thereafter, with the help of an intuitive graphical user interface, an operator types in coordinates of specified positions, and the software responds by commanding the robot to move the target to the positions. The software has capabilities for correcting errors and for recording data from the guidance-sensor system being calibrated. The software can also command that the target be moved in a predetermined sequence of motions between specified positions and can be run in an advanced control mode in which, among other things, the target can be moved beyond the limits set by the limit switches.
Wolfs, Vincent; Villazon, Mauricio Florencio; Willems, Patrick
2013-01-01
Applications such as real-time control, uncertainty analysis and optimization require an extensive number of model iterations. Full hydrodynamic sewer models are not sufficient for these applications due to the excessive computation time. Simplifications are consequently required. A lumped conceptual modelling approach results in a much faster calculation. The process of identifying and calibrating the conceptual model structure could, however, be time-consuming. Moreover, many conceptual models lack accuracy, or do not account for backwater effects. To overcome these problems, a modelling methodology was developed which is suited for semi-automatic calibration. The methodology is tested for the sewer system of the city of Geel in the Grote Nete river basin in Belgium, using both synthetic design storm events and long time series of rainfall input. A MATLAB/Simulink® tool was developed to guide the modeller through the step-wise model construction, reducing significantly the time required for the conceptual modelling process.
NASA Astrophysics Data System (ADS)
Czapla-Myers, Jeffrey; McCorkel, Joel; Anderson, Nikolaus; Biggar, Stuart
2018-01-01
This paper describes the current ground-based calibration results of Landsat 7 Enhanced Thematic Mapper Plus (ETM+), Landsat 8 Operational Land Imager (OLI), Terra and Aqua Moderate Resolution Imaging Spectroradiometer (MODIS), Suomi National Polar orbiting Partnership Visible Infrared Imaging Radiometer Suite (VIIRS), and Sentinel-2A Multispectral Instrument (MSI), using an automated suite of instruments located at Railroad Valley, Nevada, USA. The period of this study is 2012 to 2016 for MODIS, VIIRS, and ETM+, 2013 to 2016 for OLI, and 2015 to 2016 for MSI. The current results show that all sensors agree with the Radiometric Calibration Test Site (RadCaTS) to within ±5% in the solar-reflective regime, except for one band on VIIRS that is within ±6%. In the case of ETM+ and OLI, the agreement is within ±3%, and, in the case of MODIS, the agreement is within ±3.5%. MSI agrees with RadCaTS to within ±4.5% in all applicable bands.
Unexpected bias in NIST 4πγ ionization chamber measurements.
Unterweger, M P; Fitzgerald, R
2012-09-01
In January of 2010, it was discovered that the source holder used for calibrations in the NIST 4πγ ionization chamber (IC) has not been stable. The positioning ring that determines the height of the sample in the reentrant tube of the IC has slowly shifted during 35 years of use. This has led to a slow change in the calibration factors for the various radionuclides measured by this instrument. The changes are dependent on γ-ray energy and the time the IC was calibrated for a given radionuclide. A review of the historic data with regard to when the calibrations were done has enabled us to approximate the magnitude of the changes with time. This requires a number of assumptions, and corresponding uncertainty components, including whether the changes in height were gradual or in steps, as shown in drawings of the sample holder. The changes in calibration factors have been most significant for low-energy gamma emitters such as (133)Xe, (241)Am, (125)I and (85)Kr. The corrections to previous calibrations can be approximated and the results corrected with an increase in the overall uncertainty. At present we are recalibrating the IC based on new primary measurements of the radionuclides measured on the IC. Likewise, we have been calibrating a new automated ionization-chamber system. A bigger problem is the significant number of half-life results NIST has published over the last 35 years that are based on IC measurements. The effect on half-life is largest for long-lived radionuclides, especially low-energy γ-ray emitters. This presentation reviews our results and recommends changes in values and/or uncertainties; any recommendations for withdrawal of results will also be made.
NASA Astrophysics Data System (ADS)
Norton, P. A., II
2015-12-01
The U.S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets such as streamflow, snow water equivalent (SWE), and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g., the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.
Hunt, R.J.; Feinstein, D.T.; Pint, C.D.; Anderson, M.P.
2006-01-01
As part of the USGS Water, Energy, and Biogeochemical Budgets project and the NSF Long-Term Ecological Research work, a parameter estimation code was used to calibrate a deterministic groundwater flow model of the Trout Lake Basin in northern Wisconsin. Observations included traditional calibration targets (head, lake stage, and baseflow observations) as well as unconventional targets such as groundwater flows to and from lakes, depth of a lake water plume, and time of travel. The unconventional data types were important for parameter estimation convergence and allowed the development of a more detailed parameterization capable of resolving model objectives with well-constrained parameter values. Independent estimates of groundwater inflow to lakes were most important for constraining lakebed leakance and the depth of the lake water plume was important for determining hydraulic conductivity and conceptual aquifer layering. The most important target overall, however, was a conventional regional baseflow target that led to correct distribution of flow between sub-basins and the regional system during model calibration. The use of an automated parameter estimation code: (1) facilitated the calibration process by providing a quantitative assessment of the model's ability to match disparate observed data types; and (2) allowed assessment of the influence of observed targets on the calibration process. The model calibration required the use of a 'universal' parameter estimation code in order to include all types of observations in the objective function. The methods described in this paper help address issues of watershed complexity and non-uniqueness common to deterministic watershed models.
1998-02-12
[Report front matter and table-of-contents residue. Recoverable subject terms: Global Positioning System (GPS), High Frequency Active Auroral Research Program (HAARP), ionosphere, radiowave scintillation. Listed sections include Scintillation Simulation, Automated Calibrations, Development of HAARP Diagnostics, Facilitation of HAARP Operations and Broader Scientific Collaborations, Public Relations, and Publications.]
Evolution of solid rocket booster component testing
NASA Technical Reports Server (NTRS)
Lessey, Joseph A.
1989-01-01
The evolution of one of the new generation of test sets developed for the Solid Rocket Booster of the U.S. Space Transportation System is described. Requirements leading to factory checkout of the test set are explained, including the evolution from manual to semiautomated toward fully automated status. Individual improvements in the built-in test equipment, self-calibration, and software flexibility are addressed, and the insertion of fault detection to improve reliability is discussed.
Chromatic aberration correction: an enhancement to the calibration of low-cost digital dermoscopes.
Wighton, Paul; Lee, Tim K; Lui, Harvey; McLean, David; Atkins, M Stella
2011-08-01
We present a method for calibrating low-cost digital dermoscopes that corrects for color and inconsistent lighting and also corrects for chromatic aberration. Chromatic aberration is a form of radial distortion that often occurs in inexpensive digital dermoscopes and creates red and blue halo-like effects on edges. Being radial in nature, distortions due to chromatic aberration are not constant across the image, but rather vary in both magnitude and direction. As a result, distortions are not only visually distracting but could also mislead automated characterization techniques. Two low-cost dermoscopes, based on different consumer-grade cameras, were tested. Color is corrected by imaging a reference and applying singular value decomposition to determine the transformation required to ensure accurate color reproduction. Lighting is corrected by imaging a uniform surface and creating lighting correction maps. Chromatic aberration is corrected using a second-order radial distortion model. Our results for color and lighting calibration are consistent with previously published results, while distortions due to chromatic aberration can be reduced by 42-47% in the two systems considered. The disadvantages of inexpensive dermoscopy can be quickly and substantially mitigated with a suitable calibration procedure.
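A hedged sketch of the second-order radial model: resample the red and blue channels along radially scaled coordinates, leaving green as the reference channel. The distortion coefficients here are invented; in practice they would be estimated from a calibration target.

```python
# Sketch of a second-order radial correction for chromatic aberration:
# resample a channel at r_scaled = r * (1 + k * r^2) via inverse mapping.
# The image and the coefficients k are invented placeholders.
import numpy as np
from scipy.ndimage import map_coordinates

def correct_channel(channel, k):
    h, w = channel.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    r2 = ((yy - cy) ** 2 + (xx - cx) ** 2) / (cx ** 2 + cy ** 2)  # normalized r^2
    scale = 1.0 + k * r2
    coords = [cy + (yy - cy) * scale, cx + (xx - cx) * scale]
    return map_coordinates(channel, coords, order=1, mode="nearest")

rgb = np.random.rand(480, 640, 3)            # placeholder dermoscopy image
corrected = rgb.copy()
corrected[..., 0] = correct_channel(rgb[..., 0], k=+0.004)  # pull red inward
corrected[..., 2] = correct_channel(rgb[..., 2], k=-0.004)  # push blue outward
```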
Experiences in Automated Calibration of a Nickel Equation of State
NASA Astrophysics Data System (ADS)
Carpenter, John H.
2017-06-01
Wide availability of large computers has led to increasing incorporation of computational data, such as from density functional theory molecular dynamics, in the development of equation of state (EOS) models. Once a grid of computational data is available, it is usually left to an expert modeler to model the EOS using traditional techniques. One can envision the possibility of using the increasing computing resources to perform black-box calibration of EOS models, with the goal of reducing the workload on the modeler or enabling non-experts to generate good EOSs with such a tool. Progress towards building such a black-box calibration tool will be explored in the context of developing a new, wide-range EOS for nickel. While some details of the model and data will be shared, the focus will be on what was learned by automatically calibrating the model in a black-box method. Model choices and ensuring physicality will also be discussed. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Technical Reports Server (NTRS)
Tawfik, Hazem
1991-01-01
A relatively simple, inexpensive, and generic technique that could be used in both laboratories and some operational site environments is introduced at the Robotics Applications and Development Laboratory (RADL) at Kennedy Space Center (KSC). In addition, this report gives a detailed explanation of the setup procedure, data collection, and analysis using this new technique that was developed at the State University of New York at Farmingdale. The technique was used to evaluate the repeatability, accuracy, and overshoot of the Unimate Industrial Robot, PUMA 500. The data were statistically analyzed to provide an insight into the performance of the systems and components of the robot. Also, the same technique was used to check the forward kinematics against the inverse kinematics of RADL's PUMA robot. Recommendations were made for RADL to use this technique for laboratory calibration of the currently existing robots, such as the ASEA, the high-speed controller, and the Automated Radiator Inspection Device (ARID). Also, recommendations were made to develop and establish other calibration techniques that will be more suitable for site calibration environments and robot certification.
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Johnson, B. Carol; Yoon, Howard W.; Bruce, Sally S.; Shaw, Ping-Shine; Thompson, Ambler; Hooker, Stanford B.; Barnes, Robert A.; Eplee, Robert E., Jr.;
1999-01-01
This report documents the fifth Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Intercalibration Round-Robin Experiment (SIRREX-5), which was held at the National Institute of Standards and Technology (NIST) on 23-30 July 1996. The agenda for SIRREX-5 was established based on recommendations made during SIRREX-4. For the first time in a SIRREX activity, instrument intercomparisons were performed at field sites, which were near NIST. The goals of SIRREX-5 were to continue the emphasis on training and the implementation of standard measurement practices, investigate the calibration methods and measurement chains in use by the oceanographic community, provide opportunities for discussion, and intercompare selected instruments. As at SIRREX-4, the day was divided between morning lectures and afternoon laboratory exercises. A set of core laboratory sessions was performed: 1) in-water radiant flux measurements; 2) in-air radiant flux measurements; 3) spectral radiance responsivity measurements using the plaque method; 4) device calibration or stability monitoring with portable field sources; and 5) various ancillary exercises designed to illustrate radiometric concepts. Before, during, and after SIRREX-5, NIST calibrated the SIRREX-5 participating radiometers for radiance and irradiance responsivity. The Facility for Automated Spectroradiometric Calibrations (FASCAL) was scheduled for spectral irradiance calibrations of standard lamps during SIRREX-5. Three lamps from the SeaWiFS community were submitted and two were calibrated.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.
2014-09-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
Reprocessing VIIRS sensor data records from the early SNPP mission
NASA Astrophysics Data System (ADS)
Blonski, Slawomir; Cao, Changyong
2016-10-01
The Visible-Infrared Imaging Radiometer Suite (VIIRS) instrument onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite began acquiring Earth observations in November 2011. VIIRS data from all spectral bands became available three months after launch when all infrared-band detectors were cooled down to operational temperature. Before that, VIIRS sensor data record (SDR) products were successfully generated for the visible and near infrared (VNIR) bands. Although VIIRS calibration has been significantly improved through the four years of the SNPP mission, SDR reprocessing for this early mission phase has yet to be performed. Despite a rapid decrease in the telescope throughput that occurred during the first few months on orbit, calibration coefficients for the VNIR bands were recently successfully generated using an automated procedure that is currently deployed in the operational SDR production system. The reanalyzed coefficients were derived from measurements collected during solar calibration events that occur on every SNPP orbit since the beginning of the mission. The new coefficients can be further used to reprocess the VIIRS SDR products. In this study, they are applied to reprocess VIIRS data acquired over pseudo-invariant calibration sites Libya 4 and Sudan 1 in Sahara between November 2011 and February 2012. Comparison of the reprocessed SDR products with the original ones demonstrates improvements in the VIIRS calibration provided by the reprocessing. Since SNPP is the first satellite in a series that will form the Joint Polar Satellite System (JPSS), calibration methods developed for the SNPP VIIRS will also apply to the future JPSS measurements.
Satellite-derived potential evapotranspiration for distributed hydrologic runoff modeling
NASA Astrophysics Data System (ADS)
Spies, R. R.; Franz, K. J.; Bowman, A.; Hogue, T. S.; Kim, J.
2012-12-01
Distributed models have the ability of incorporating spatially variable data, especially high resolution forcing inputs such as precipitation, temperature and evapotranspiration in hydrologic modeling. Use of distributed hydrologic models for operational streamflow prediction has been partially hindered by a lack of readily available, spatially explicit input observations. Potential evapotranspiration (PET), for example, is currently accounted for through PET input grids that are based on monthly climatological values. The goal of this study is to assess the use of satellite-based PET estimates that represent the temporal and spatial variability, as input to the National Weather Service (NWS) Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM). Daily PET grids are generated for six watersheds in the upper Mississippi River basin using a method that applies only MODIS satellite-based observations and the Priestley-Taylor formula (MODIS-PET). The use of MODIS-PET grids will be tested against the use of the current climatological PET grids for simulating basin discharge. Gridded surface temperature forcing data are derived by applying the inverse distance weighting spatial prediction method to point-based station observations from the Automated Surface Observing System (ASOS) and Automated Weather Observing System (AWOS). Precipitation data are obtained from the Climate Prediction Center's (CPC) Climatology-Calibrated Precipitation Analysis (CCPA). A-priori gridded parameters for the Sacramento Soil Moisture Accounting Model (SAC-SMA), Snow-17 model, and routing model are initially obtained from the Office of Hydrologic Development and further calibrated using an automated approach. The potential of the MODIS-PET to be used in an operational distributed modeling system will be assessed with the long-term goal of promoting research to operations transfers and advancing the science of hydrologic forecasting.
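For reference, the Priestley-Taylor formula underlying the MODIS-PET grids can be written out directly. The sketch below uses the standard coefficient and constants with invented daily inputs for a single grid cell; the study's actual grids are driven by MODIS retrievals.

```python
# Worked Priestley-Taylor example: daily PET from air temperature and
# net radiation. Constants are standard; inputs are invented placeholders.
import numpy as np

ALPHA = 1.26           # Priestley-Taylor coefficient
GAMMA = 0.066          # psychrometric constant (kPa/degC)
LAMBDA = 2.45          # latent heat of vaporization (MJ/kg)

def priestley_taylor_pet(t_air_c, net_radiation_mj, ground_flux_mj=0.0):
    """Daily PET (mm/day); radiation terms in MJ m^-2 day^-1."""
    # Slope of the saturation vapour pressure curve (kPa/degC).
    es = 0.6108 * np.exp(17.27 * t_air_c / (t_air_c + 237.3))
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2
    return ALPHA * (delta / (delta + GAMMA)) \
        * (net_radiation_mj - ground_flux_mj) / LAMBDA

print(priestley_taylor_pet(t_air_c=22.0, net_radiation_mj=14.0))  # ~5 mm/day
```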
Campbell, J Q; Coombs, D J; Rao, M; Rullkoetter, P J; Petrella, A J
2016-09-06
The purpose of this study was to seek broad verification and validation of human lumbar spine finite element models created using a previously published automated algorithm. The automated algorithm takes segmented CT scans of lumbar vertebrae, automatically identifies important landmarks and contact surfaces, and creates a finite element model. Mesh convergence was evaluated by examining changes in key output variables in response to mesh density. Semi-direct validation was performed by comparing experimental results for a single specimen to the automated finite element model results for that specimen with calibrated material properties from a prior study. Indirect validation was based on a comparison of results from automated finite element models of 18 individual specimens, all using one set of generalized material properties, to a range of data from the literature. A total of 216 simulations were run and compared to 186 experimental data ranges in all six primary bending modes up to 7.8 Nm with follower loads up to 1000 N. Mesh convergence results showed less than a 5% difference in key variables when the original mesh density was doubled. The semi-direct validation results showed that the automated method produced results comparable to manual finite element modeling methods. The indirect validation results showed a wide range of outcomes due to variations in the geometry alone. The studies showed that the automated models can be used to reliably evaluate lumbar spine biomechanics, specifically within our intended context of use: in pure bending modes, under relatively low non-injurious simulated in vivo loads, to predict torque rotation response, disc pressures, and facet forces.
Automated image analysis of alpha-particle autoradiographs of human bone
NASA Astrophysics Data System (ADS)
Hatzialekou, Urania; Henshaw, Denis L.; Fews, A. Peter
1988-01-01
Further techniques [4,5] for the analysis of CR-39 α-particle autoradiographs have been developed for application to α-autoradiography of autopsy bone at natural levels of exposure. The most significant new approach is the use of fully automated image analysis using a system developed in this laboratory. A 5 cm × 5 cm autoradiograph of tissue in which the activity is below 1 Bq kg⁻¹ is scanned to both locate and measure the recorded α-particle tracks at a rate of 5 cm²/h. Improved methods of calibration have also been developed. The techniques are described and, in order to illustrate their application, a bone sample contaminated with 239Pu is analysed. Results from natural levels are the subject of a separate publication.
SpcAudace: Spectroscopic processing and analysis package of Audela software
NASA Astrophysics Data System (ADS)
Mauclaire, Benjamin
2017-11-01
SpcAudace processes long slit spectra with automated pipelines and performs astrophysical analysis of the resulting data. These powerful pipelines do all the required steps in one pass: standard preprocessing, masking of bad pixels, geometric corrections, registration, optimized spectrum extraction, wavelength calibration, and instrumental response computation and correction. Both high and low resolution long slit spectra are managed for stellar and non-stellar targets. Many types of publication-quality figures can be easily produced: pdf and png plots or annotated time series plots. Astrophysical quantities can be derived from individual spectra or large numbers of spectra with advanced functions: from line profile characteristics to equivalent width and periodogram. More than 300 documented functions are available and can be used in TCL scripts for automation. SpcAudace is based on the Audela open source software.
Automatic classification of blank substrate defects
NASA Astrophysics Data System (ADS)
Boettiger, Tom; Buck, Peter; Paninjath, Sankaranarayanan; Pereira, Mark; Ronald, Rob; Rost, Dan; Samir, Bhamidipati
2014-10-01
Mask preparation stages are crucial in mask manufacturing, since the mask will later act as a template for a considerable number of dies on the wafer. Defects on the initial blank substrate, and subsequent cleaned and coated substrates, can have a profound impact on the usability of the finished mask. This emphasizes the need for early and accurate identification of blank substrate defects and the risk they pose to the patterned reticle. While Automatic Defect Classification (ADC) is a well-developed technology for inspection and analysis of defects on patterned wafers and masks in the semiconductor industry, ADC for mask blanks is still in the early stages of adoption and development. Calibre ADC is a powerful analysis tool for fast, accurate, consistent and automatic classification of defects on mask blanks. Accurate, automated classification of mask blanks leads to better usability of blanks by enabling defect avoidance technologies during mask writing. Detailed information on blank defects can help to select appropriate job-decks to be written on the mask by defect avoidance tools [1][4][5]. Smart algorithms separate critical defects from the potentially large number of non-critical defects or false defects detected at various stages during mask blank preparation. Mechanisms used by Calibre ADC to identify and characterize defects include defect location and size, signal polarity (dark, bright) in both transmitted and reflected review images, and distinguishing defect signals from background noise in defect images. The Calibre ADC engine then uses a decision tree to translate this information into a defect classification code. Using this automated process improves classification accuracy, repeatability and speed, while avoiding the subjectivity of human judgment compared to the alternative of manual defect classification by trained personnel [2]. This paper focuses on the results from the evaluation of the Automatic Defect Classification (ADC) product at MP Mask Technology Center (MPMask). The Calibre ADC tool was qualified on production mask blanks against manual classification. The classification accuracy of ADC is greater than 95% for critical defects, with an overall accuracy of 90%. Sensitivity to weak defect signals and locating the defect in the images remain challenges we are resolving. The performance of the tool has been demonstrated on multiple mask types and is ready for deployment in full-volume mask manufacturing production flow. Implementation of Calibre ADC is estimated to reduce the misclassification of critical defects by 60-80%.
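The decision-tree mechanism described above can be caricatured in a few lines: route a defect by signal strength, polarity in transmitted and reflected images, and size. The class codes and thresholds below are invented for illustration and are not Calibre ADC's actual rules.

```python
# Toy decision tree in the spirit of the ADC flow described above.
# All class names and thresholds are invented placeholders.
from dataclasses import dataclass

@dataclass
class Defect:
    size_nm: float
    transmitted_polarity: str   # "dark" or "bright"
    reflected_polarity: str
    snr: float                  # signal-to-background ratio

def classify(d: Defect) -> str:
    if d.snr < 1.5:
        return "FALSE_DETECTION"            # signal lost in background noise
    if d.transmitted_polarity == "dark" and d.reflected_polarity == "dark":
        # Opaque in both review modes: particle sitting on the substrate.
        return "PARTICLE" if d.size_nm >= 80 else "MICRO_PARTICLE"
    if d.transmitted_polarity == "bright":
        # Transmits more light than the surround: pit or thin spot.
        return "PIT"
    return "COATING_DEFECT"

print(classify(Defect(size_nm=120, transmitted_polarity="dark",
                      reflected_polarity="dark", snr=4.2)))
```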
Evaluation of Potential Evapotranspiration from a Hydrologic Model on a National Scale
NASA Astrophysics Data System (ADS)
Hakala, K. A.; Hay, L.; Markstrom, S. L.
2014-12-01
The US Geological Survey has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development and facilitate the application of simulations on the scale of the continental US. The NHM has a consistent geospatial fabric for modeling, consisting of over 100,000 hydrologic response units (HRUs). Each HRU requires accurate parameter estimates, some of which are attained from automated calibration. However, improved calibration can be achieved by initially utilizing as many parameters as possible from national data sets. This presentation investigates the effectiveness of calculating potential evapotranspiration (PET) parameters based on mean monthly values from the NOAA PET Atlas. Additional PET products are then used to evaluate the PET parameters. Effectively utilizing existing national-scale data sets can simplify the effort in establishing a robust NHM.
An automated and universal method for measuring mean grain size from a digital image of sediment
Buscombe, Daniel D.; Rubin, David M.; Warrick, Jonathan A.
2010-01-01
Existing methods for estimating mean grain size of sediment in an image require either complicated sequences of image processing (filtering, edge detection, segmentation, etc.) or statistical procedures involving calibration. We present a new approach which uses Fourier methods to calculate grain size directly from the image without requiring calibration. Based on analysis of over 450 images, we found the accuracy to be within approximately 16% across the full range from silt to pebbles. Accuracy is comparable to, or better than, existing digital methods. The new method, in conjunction with recent advances in technology for taking appropriate images of sediment in a range of natural environments, promises to revolutionize the logistics and speed at which grain-size data may be obtained from the field.
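One way to picture the Fourier approach: the radially averaged power spectrum of a sediment image concentrates energy at wavelengths comparable to the grain size. The sketch below reports the spectral-centroid wavelength as a grain-size proxy; the published estimator differs in its exact spectral statistic and calibration-free scaling.

```python
# Sketch of a Fourier grain-size proxy: compute the radially averaged
# power spectrum and convert its centroid frequency to a wavelength.
# The random test image is a placeholder for a real sediment photo.
import numpy as np

def spectral_grain_size(image, mm_per_pixel):
    """Grain-size proxy: wavelength of the radial power-spectrum centroid."""
    image = image - image.mean()
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.maximum(counts, 1)
    freqs = np.arange(1, radial.size)             # cycles per image; skip DC
    centroid = (freqs * radial[1:]).sum() / radial[1:].sum()
    return (min(h, w) / centroid) * mm_per_pixel  # dominant wavelength in mm

rng = np.random.default_rng(1)
print(spectral_grain_size(rng.random((256, 256)), mm_per_pixel=0.05))
```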
Light curves of flat-spectrum radio sources (Jenness+, 2010)
NASA Astrophysics Data System (ADS)
Jenness, T.; Robson, E. I.; Stevens, J. A.
2010-05-01
Calibrated data for 143 flat-spectrum extragalactic radio sources are presented at a wavelength of 850 μm covering a 5-yr period from 2000 April. The data, obtained at the James Clerk Maxwell Telescope using the Submillimetre Common-User Bolometer Array (SCUBA) camera in pointing mode, were analysed using an automated pipeline process based on the Observatory Reduction and Acquisition Control - Data Reduction (ORAC-DR) system. This paper describes the techniques used to analyse and calibrate the data, and presents the data base of results along with a representative sample of the better-sampled light curves. A re-analysis of previously published data from 1997 to 2000 is also presented. The combined catalogue, comprising 10493 flux density measurements, provides a unique and valuable resource for studies of extragalactic radio sources. (2 data files).
Bay of Fundy verification of a system for multidate Landsat measurement of suspended sediment
NASA Technical Reports Server (NTRS)
Munday, J. C., Jr.; Afoldi, T. T.; Amos, C. L.
1981-01-01
A system for automated multidate Landsat CCT MSS measurement of suspended sediment concentration (S) has been implemented and verified on nine sets (108 points) of data from the Bay of Fundy, Canada. The system employs 'chromaticity analysis' to provide automatic pixel-by-pixel adjustment of atmospheric variations, permitting reference calibration data from one or several dates to be spatially and temporally extrapolated to other regions and to other dates. For verification, each data set was used in turn as test data against the remainder as a calibration set: the average absolute error was 44 percent of S over the range 1-1000 mg/l. The system can be used to measure chlorophyll (in the absence of atmospheric variations), Secchi disk depth, and turbidity.
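Chromaticity analysis rests on normalizing the band radiances so that overall brightness cancels, then regressing concentration on the chromaticity coordinates. The sketch below fits log S against one coordinate using invented calibration points; the operational system additionally adjusts each pixel's chromaticity for atmospheric shift before applying such a fit.

```python
# Sketch of MSS chromaticity analysis: brightness-normalized band ratios
# regressed against log suspended sediment concentration S (mg/l).
# Radiances, S values, and the single-coordinate fit are placeholders.
import numpy as np

def chromaticity(b4, b5, b6):
    total = b4 + b5 + b6
    return b4 / total, b5 / total          # (x, y) chromaticity coordinates

# Hypothetical calibration set: band radiances and measured S (mg/l).
cal = np.array([[22.0, 14.0,  9.0,  12.0],
                [28.0, 21.0, 11.0,  85.0],
                [31.0, 27.0, 13.0, 300.0],
                [33.0, 31.0, 15.0, 800.0]])
x, _ = chromaticity(cal[:, 0], cal[:, 1], cal[:, 2])
slope, intercept = np.polyfit(x, np.log10(cal[:, 3]), 1)

def estimate_s(b4, b5, b6):
    xi, _ = chromaticity(b4, b5, b6)
    return 10 ** (slope * xi + intercept)

print(estimate_s(29.0, 23.0, 12.0))        # S estimate for a new pixel
```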
Advanced millimeter wave imaging systems
NASA Technical Reports Server (NTRS)
Schuchardt, J. M.; Gagliano, J. A.; Stratigos, J. A.; Webb, L. L.; Newton, J. M.
1980-01-01
Unique techniques are being utilized to develop self-contained imaging radiometers operating at single and multiple frequencies near 35, 95 and 183 GHz. These techniques include medium to large antennas for high spatial resolution, low-loss open structures for RF confinement and calibration, wide bandwidths for good sensitivity, plus total automation of the unit operation and data collection. Applications include: detection of severe storms, imaging of motor vehicles, and the remote sensing of changes in material properties.
Oculomotor Reflexes as a Test of Visual Dysfunctions in Cognitively Impaired Observers
2012-10-01
[Fragmented report text and figure residue. Recoverable content: absolute gaze is not measured in the paradigm, since that would require a gaze calibration; a figure plots horizontal gaze position against time while dots drift rightward, with a red bar marking a visual nystagmus event; a table of reflex stimulus functions lists luminance gratings and low-level motion for visual nystagmus, equiluminant gratings for color vision, and contrast gratings.]
1983-06-01
[Garbled report-form and table-of-contents residue. Recoverable content: topics include GPETE Initial Outfitting (GINO), GPETE End Item Replacement (GEIR), and GINO requirements determination; the calibration interval of a sample of 305 GPETE items increased from 8.8 to 13.6 months, with estimated annual savings from this increase of 18.000 (currency/units garbled).]
Figl, Michael; Ede, Christopher; Hummel, Johann; Wanschitz, Felix; Ewers, Rolf; Bergmann, Helmar; Birkfellner, Wolfgang
2005-11-01
Ever since the development of the first applications in image-guided therapy (IGT), the use of head-mounted displays (HMDs) was considered an important extension of existing IGT technologies. Several approaches to utilizing HMDs and modified medical devices for augmented reality (AR) visualization were implemented. These approaches include video see-through systems, semitransparent mirrors, modified endoscopes, and modified operating microscopes. Common to all these devices is the fact that a precise calibration between the display and three-dimensional coordinates in the patient's frame of reference is compulsory. In optical see-through devices based on complex optical systems such as operating microscopes or operating binoculars (as in the case of the system presented in this paper), this procedure can become increasingly difficult since precise camera calibration for every focus and zoom position is required. We present a method for fully automatic calibration of the operating binocular Varioscope M5 AR for the full range of zoom and focus settings available. Our method uses a special calibration pattern, a linear guide driven by a stepping motor, and special calibration software. The overlay error in the calibration plane was found to be 0.14-0.91 mm, which is less than 1% of the field of view. Using the motorized calibration rig as presented in the paper, we were also able to assess the dynamic latency when viewing augmentation graphics on a mobile target; spatial displacement due to latency was found to be in the range of 1.1-2.8 mm at maximum, and the disparity between the true object and its computed overlay corresponded to a latency of 0.1 s. We conclude that the automatic calibration method presented in this paper is sufficient in terms of accuracy and time requirements for standard uses of optical see-through systems in a clinical environment.
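One practical way to cover a full zoom/focus range is to calibrate at a grid of motor settings and interpolate parameters in between, as sketched below for focal length with invented grid values. The paper's motorized rig automates collecting calibrations across these settings, though its exact parameterization may differ.

```python
# Sketch of zoom/focus-dependent intrinsics: calibrate at a grid of
# (zoom, focus) motor settings and bilinearly interpolate in between.
# Grid values below are invented placeholders.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

zoom_steps = np.array([0.0, 50.0, 100.0])    # motor units
focus_steps = np.array([0.0, 100.0, 200.0, 300.0])
focal_mm = np.array([                        # calibrated at each grid node
    [25.0, 25.2, 25.5, 25.9],
    [40.1, 40.4, 40.8, 41.3],
    [60.3, 60.7, 61.2, 61.8],
])

f_interp = RegularGridInterpolator((zoom_steps, focus_steps), focal_mm)
print(f_interp([[75.0, 150.0]]))             # focal length between nodes
```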
Procedure for the Selection and Validation of a Calibration Model I-Description and Application.
Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D
2017-05-01
Calibration model selection is required for all quantitative methods in toxicology and more broadly in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and weighting factor correctly modeling the data. A mis-selection of the calibration model will generate lower quality control (QC) accuracy, with errors up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone.
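The statistical skeleton of the scheme (variance F-test for weighting, partial F-test for order, normality test of standardized residuals) can be sketched directly. The data below are simulated, and the weighting choice is simplified to 1/x² rather than the paper's comparison of weighted normalized variance spreads.

```python
# Skeleton of the described selection scheme: (1) F-test of ULOQ vs LLOQ
# replicate variances for weighting, (2) partial F-test for quadratic vs
# linear order, (3) normality test of standardized residuals to validate.
# Calibration data are simulated with ~3% proportional noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
conc = np.repeat([1, 5, 10, 50, 100, 500, 1000], 3).astype(float)
resp = 2.0 * conc * (1 + rng.normal(0, 0.03, conc.size))

# (1) Variance F-test between lowest and highest calibrator replicates.
lo, hi = resp[conc == conc.min()], resp[conc == conc.max()]
f_stat = hi.var(ddof=1) / lo.var(ddof=1)
p_weight = 1 - stats.f.cdf(f_stat, hi.size - 1, lo.size - 1)
needs_weighting = p_weight < 0.05

# (2) Partial F-test: does a quadratic term significantly reduce the RSS?
w = 1.0 / conc**2 if needs_weighting else np.ones_like(conc)
rss = {}
for order in (1, 2):
    coef = np.polyfit(conc, resp, order, w=np.sqrt(w))
    rss[order] = np.sum(w * (resp - np.polyval(coef, conc)) ** 2)
f_partial = (rss[1] - rss[2]) / (rss[2] / (conc.size - 3))
order = 2 if (1 - stats.f.cdf(f_partial, 1, conc.size - 3)) < 0.05 else 1

# (3) Validate via normality of the standardized weighted residuals.
coef = np.polyfit(conc, resp, order, w=np.sqrt(w))
resid = np.sqrt(w) * (resp - np.polyval(coef, conc))
print(order, needs_weighting, stats.kstest(stats.zscore(resid), "norm").pvalue)
```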
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reyhan, M; Yue, N
Purpose: To validate an automated image processing algorithm designed to detect the center of radiochromic film used for in vivo film dosimetry against the current gold standard of manual selection. Methods: An image processing algorithm was developed to automatically select the region of interest (ROI) in *.tiff images that contain multiple pieces of radiochromic film (0.5×1.3 cm²). After a user has linked a calibration file to the processing algorithm and selected a *.tiff file for processing, an ROI is automatically detected for all films by a combination of thresholding and erosion, which removes edges and any additional markings for orientation. Calibration is applied to the mean pixel values from the ROIs and a *.tiff image is output displaying the original image with an overlay of the ROIs and the measured doses. Validation of the algorithm was determined by comparing in vivo dose determined using the current gold standard (manually drawn ROIs) versus automated ROIs for n=420 scanned films. Bland-Altman analysis, paired t-test, and linear regression were performed to demonstrate agreement between the processes. Results: The measured doses ranged from 0.2-886.6 cGy. Bland-Altman analysis of the two techniques (automatic minus manual) revealed a bias of -0.28 cGy and a 95% confidence interval of (-6.1 cGy, 5.5 cGy). These values demonstrate excellent agreement between the two techniques. Paired t-test results showed no statistical differences between the two techniques, p=0.98. Linear regression with a forced zero intercept demonstrated that Automatic=0.997*Manual, with a Pearson correlation coefficient of 0.999. The minimal differences between the two techniques may be explained by the fact that the hand-drawn ROIs were not identical to the automatically selected ones. The average processing time was 6.7 seconds in Matlab on an Intel Core 2 Duo processor. Conclusion: An automated image processing algorithm has been developed and validated, which will help minimize user interaction and processing time of radiochromic film used for in vivo dosimetry.
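The ROI pipeline described (thresholding, erosion, labeling, then calibrating mean pixel values) maps naturally onto standard image-processing primitives. In the sketch below the scan, threshold, and dose_from_pixel calibration are all invented placeholders.

```python
# Sketch of the described ROI detection: threshold to find film pieces,
# erode to drop edges and fiducial marks, label the remaining blobs, and
# convert each blob's mean pixel value to dose. All values are invented.
import numpy as np
from scipy import ndimage

def dose_from_pixel(mean_pv):
    """Hypothetical stand-in for the user's linked calibration file."""
    return 1000.0 * (1.0 - mean_pv / 65535.0)

scan = np.full((200, 300), 60000, dtype=np.uint16)   # bright background
scan[50:90, 40:120] = 30000                          # two darkened films
scan[120:160, 150:260] = 22000

film_mask = scan < 50000                  # film is darker than background
film_mask = ndimage.binary_erosion(film_mask, iterations=5)
labels, n = ndimage.label(film_mask)
for region in range(1, n + 1):
    mean_pv = scan[labels == region].mean()
    print(f"film {region}: {dose_from_pixel(mean_pv):.1f} cGy")
```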
Erel, Ozcan
2004-04-01
To develop a novel colorimetric and automated direct measurement method for total antioxidant capacity (TAC). A new-generation, more stable, colored 2,2'-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) radical cation (ABTS(*+)) was employed. The ABTS(*+) is decolorized by antioxidants according to their concentrations and antioxidant capacities. This change in color is measured as a change in absorbance at 660 nm. This process is applied to an automated analyzer and the assay is calibrated with Trolox. The novel assay is linear up to 6 mmol Trolox equivalent/l, its precision values are lower than 3%, and there is no interference from hemoglobin, bilirubin, EDTA, or citrate. The method developed is significantly correlated with the Randox total antioxidant status (TAS) assay (r = 0.897, P < 0.0001; n = 91) and with the ferric reducing ability of plasma (FRAP) assay (r = 0.863, P < 0.0001; n = 110). Serum TAC level was lower in patients with major depression (1.69 +/- 0.11 mmol Trolox equivalent/l) than in healthy subjects (1.75 +/- 0.08 mmol Trolox equivalent/l, P = 0.041). This easy, stable, reliable, sensitive, inexpensive, and fully automated method can be used to measure total antioxidant capacity.
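The Trolox calibration amounts to a straight-line fit of absorbance change against concentration, inverted to read off a sample's TAC. All absorbance values in the sketch below are invented.

```python
# Sketch of the Trolox calibration: fit the absorbance loss at 660 nm
# against Trolox concentration, then invert the line for a sample.
# Absorbance values are invented placeholders.
import numpy as np

trolox_mmol_l = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0])
delta_a660 = np.array([0.000, 0.071, 0.143, 0.285, 0.570, 0.851])

slope, intercept = np.polyfit(trolox_mmol_l, delta_a660, 1)

def tac(sample_delta_a660):
    """Total antioxidant capacity in mmol Trolox equivalent/l."""
    return (sample_delta_a660 - intercept) / slope

print(round(tac(0.245), 2))   # ~1.7 mmol Trolox equivalent/l
```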
NASA Technical Reports Server (NTRS)
Hook, Simon J.
2008-01-01
The presentation includes an introduction, the Lake Tahoe site layout and measurements, the Salton Sea site layout and measurements, field instrument calibration and cross-calibrations, data reduction methodology and error budgets, and example results for MODIS. Summary and conclusions: 1) The Lake Tahoe CA/NV automated validation site was established in 1999 to assess the radiometric accuracy of satellite and airborne mid- and thermal-infrared data and products; water surface temperatures range from 4 to 25 °C. 2) The Salton Sea CA automated validation site was established in 2008 to broaden the range of available water surface temperatures and atmospheric water vapor test cases; water surface temperatures range from 15 to 35 °C. 3) The sites provide all information necessary for validation every 2 minutes (bulk temperature, skin temperature, air temperature, wind speed, wind direction, net radiation, relative humidity). 4) The sites have been used to validate mid- and thermal-infrared data and products from ASTER, AATSR, ATSR2, MODIS-Terra, MODIS-Aqua, Landsat 5, Landsat 7, MTI, TES, MASTER, and MAS. 5) Approximately 10 years of data are available to help validate AVHRR.
NASA Tech Briefs, December 2006
NASA Technical Reports Server (NTRS)
2006-01-01
Topics include: Inferring Gear Damage from Oil-Debris and Vibration Data; Forecasting of Storm-Surge Floods Using ADCIRC and Optimized DEMs; User Interactive Software for Analysis of Human Physiological Data; Representation of Serendipitous Scientific Data; Automatic Locking of Laser Frequency to an Absorption Peak; Self-Passivating Lithium/Solid Electrolyte/Iodine Cells; Four-Quadrant Analog Multipliers Using G4-FETs; Noise Source for Calibrating a Microwave Polarimeter; Hybrid Deployable Foam Antennas and Reflectors; Coating MCPs with AlN and GaN; Domed, 40-cm-Diameter Ion Optics for an Ion Thruster; Gesture-Controlled Interfaces for Self-Service Machines; Dynamically Alterable Arrays of Polymorphic Data Types; Identifying Trends in Deep Space Network Monitor Data; Predicting Lifetime of a Thermomechanically Loaded Component; Partial Automation of Requirements Tracing; Automated Synthesis of Architecture of Avionic Systems; SSRL Emergency Response Shore Tool; Wholly Aromatic Ether-Imides as n-Type Semiconductors; Carbon-Nanotube-Carpet Heat-Transfer Pads; Pulse-Flow Microencapsulation System; Automated Low-Gravitation Facility Would Make Optical Fibers; Alignment Cube with One Diffractive Face; Graphite Composite Booms with Integral Hinges; Tool for Sampling Permafrost on a Remote Planet; and Special Semaphore Scheme for UHF Spacecraft Communications.
Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns
NASA Astrophysics Data System (ADS)
Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.
2012-07-01
Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet. But it is unsuitable for stereo-cameras whose calibration implies recovering camera geometry and their true-to-scale relative orientation. In contrast to all reported methods requiring additional specific coding to establish an object space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera parameter sets. The actual node correspondences on stereo-pairs remain unknown. Image pairs of a textured 3D scene are exploited for finding the fundamental matrix of the stereo-camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is assumed as the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures for avoiding mismatching are taken. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs on a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo-camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions not above 1/4 pixels for 640×480 web cameras.
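A condensed Python/OpenCV sketch of the node-matching step, under the assumptions that the textured-scene image pair and the ordered chessboard nodes are already available (all variable names are ours, not the toolbox's):

```python
import cv2
import numpy as np

def match_centre_node(left_scene, right_scene, centre_node_left, right_nodes):
    # Fundamental matrix from SIFT matches on the textured 3D scene, via RANSAC.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(left_scene, None)
    kp2, des2 = sift.detectAndCompute(right_scene, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)

    # The right-image match of a node near the left-image centre is taken
    # as the chessboard node closest to its epipolar line l' = F @ x.
    x = np.append(np.asarray(centre_node_left, dtype=float), 1.0)
    a, b, c = F @ x
    d = np.abs(a * right_nodes[:, 0] + b * right_nodes[:, 1] + c) / np.hypot(a, b)
    return right_nodes[np.argmin(d)]
```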
A parallel calibration utility for WRF-Hydro on high performance computers
NASA Astrophysics Data System (ADS)
Wang, J.; Wang, C.; Kotamarthi, V. R.
2017-12-01
A successful modeling of complex hydrological processes comprises establishing an integrated hydrological model that simulates the hydrological processes in each water regime, calibrating and validating the model performance against observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files — GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL — and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. A parameter calibration tool designed specifically for automated calibration and uncertainty estimation of the WRF-Hydro model can provide significant convenience for the modeling community. In this study, we developed a customized tool using the parallel version of the model-independent parameter estimation and uncertainty analysis tool, PEST, enabling it to run on HPCs under the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the Midwest. The sensitivity and uncertainties are analyzed using the customized PEST tool we developed.
NASA Astrophysics Data System (ADS)
Bilardi, S.; Barjatya, A.; Gasdia, F.
OSCOM, Optical tracking and Spectral characterization of CubeSats for Operational Missions, is a system capable of providing time-resolved satellite photometry using commercial-off-the-shelf (COTS) hardware and custom tracking and analysis software. This system has acquired photometry of objects as small as CubeSats using a Celestron 11” RASA and an inexpensive CMOS machine vision camera. For satellites with known shapes, these light curves can be used to verify a satellite’s attitude and the state of its deployed solar panels or antennae. While the OSCOM system can successfully track satellites and produce light curves, there is ongoing improvement towards increasing its automation while supporting additional mounts and telescopes. A newly acquired Celestron 14” Edge HD can be used with a Starizona Hyperstar to increase the SNR for small objects as well as extend beyond the limiting magnitude of the 11” RASA. OSCOM currently corrects instrumental brightness measurements for satellite range and observatory site average atmospheric extinction, but calibrated absolute brightness is required to determine information about satellites other than their spin rate, such as surface albedo. A calibration method that automatically detects and identifies background stars can use their catalog magnitudes to calibrate the brightness of the satellite in the image. We present a photometric light curve from both the 14” Edge HD and 11” RASA optical systems as well as plans for a calibration method that will perform background star photometry to efficiently determine calibrated satellite brightness in each frame.
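The planned star-based calibration reduces, in essence, to a per-frame photometric zero point; a hedged Python sketch (the names and the median estimator are our assumptions, not OSCOM's implementation):

```python
import numpy as np

def frame_zero_point(star_counts, catalog_mags, exposure_s):
    # Instrumental magnitudes of detected background stars...
    inst = -2.5 * np.log10(np.asarray(star_counts) / exposure_s)
    # ...anchored to catalog values; the median resists misidentified stars.
    return np.median(np.asarray(catalog_mags) - inst)

def satellite_mag(sat_counts, exposure_s, zp):
    return -2.5 * np.log10(sat_counts / exposure_s) + zp
```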
Qin, Yuhong; Zhang, Jingru; Zhang, Yuan; Li, Fangbing; Han, Yongtao; Zou, Nan; Xu, Haowei; Qian, Meiyuan; Pan, Canping
2016-09-02
An automated multi-plug filtration cleanup (m-PFC) method for modified QuEChERS (quick, easy, cheap, effective, rugged, and safe) extracts was developed. The automated device aims to reduce the labor-intensive manual workload of the cleanup steps. It can control the volume and speed of the pulling and pushing cycles accurately. In this work, m-PFC was based on multi-walled carbon nanotubes (MWCNTs) mixed with other sorbents and anhydrous magnesium sulfate (MgSO4) in a packed tip, for the analysis of pesticide multi-residues in crop commodities followed by liquid chromatography with tandem mass spectrometric (LC-MS/MS) detection. It was validated by analyzing 25 pesticides in six representative matrices spiked at two concentration levels of 10 and 100 μg/kg. Salts, sorbents, the m-PFC procedure, automated pulling and pushing volume, automated pulling speed, and pushing speed were optimized for each matrix. After optimization, two general automated m-PFC methods were introduced for relatively simple (apple, citrus fruit, peanut) and relatively complex (spinach, leek, green tea) matrices. Spike recoveries were between 83 and 108%, with 1-14% RSD for most analytes in the tested matrices. Matrix-matched calibrations were performed, with coefficients of determination >0.997 between concentration levels of 10 and 1000 μg/kg. The developed method was successfully applied to the determination of pesticide residues in market samples.
Smart System for Bicarbonate Control in Irrigation for Hydroponic Precision Farming.
Cambra, Carlos; Sendra, Sandra; Lloret, Jaime; Lacuesta, Raquel
2018-04-25
Improving sustainability in agriculture is an important challenge today. The automation of irrigation processes via low-cost sensors can spread technological advances in a sector strongly influenced by economic costs. This article presents an auto-calibrated pH sensor able to detect and adjust imbalances in the pH levels of the nutrient solution used in hydroponic agriculture. The sensor is composed of a pH probe and a set of micropumps that sequentially pour the different liquid solutions used to maintain the sensor calibration, as well as the water samples from the channels that contain the nutrient solution. To implement our architecture, we use an auto-calibrated pH sensor connected to a wireless node. Several nodes compose the wireless sensor network (WSN) that controls our greenhouse. The sensors periodically measure the pH level of each hydroponic support and send the information to a database (DB), which stores and analyzes the data to warn farmers about the measurements. The data can then be accessed through a user-friendly, web-based interface reachable over the Internet from desktop or mobile devices. This paper also presents the design and test bench for both the auto-calibrated pH sensor and the wireless network to verify their correct operation.
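As a rough illustration of what the micropump-driven auto-calibration cycle might implement, here is a two-buffer pH calibration sketch in Python; the hardware API (pump_buffer, read_mv, rinse) and the buffer values are entirely hypothetical:

```python
def auto_calibrate(read_mv, pump_buffer, rinse, buffers=(4.01, 7.00)):
    # Pour each reference buffer over the probe and record the electrode mV.
    readings = []
    for ph in buffers:
        pump_buffer(ph)          # hypothetical: micropump pours the buffer
        readings.append(read_mv())
        rinse()                  # flush before the next solution
    # Linear electrode model: pH = slope * mV + offset.
    slope = (buffers[1] - buffers[0]) / (readings[1] - readings[0])
    offset = buffers[0] - slope * readings[0]
    return lambda mv: slope * mv + offset   # mV -> pH for sample readings
```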
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.
2015-10-01
Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
CALL FOR PAPERS: 13th International Conference on Force and Mass Measurement
NASA Astrophysics Data System (ADS)
1992-01-01
10-14 May 1993, Helsinki Fair Centre, Finland. Scope of the Conference: the Conference reports and reviews the state of the art and future trends in force and mass measurement in science and industry. Emphasis is on the application of new methods, current problems in calibration and quality control, and advancements in new sensor technologies and industrial applications of force and mass measurement. Main themes and topics: 1. The state of the art and development trends in force and mass measurement (development and stability of high-level mass standards; mass comparators and force standard machines; new research topics in mass and force). 2. Calibration and quality control (calibration methods; estimation of uncertainties and classification of accuracies; relations between calibration, testing and quality control; requirements for quality control; verification of weighing instruments and their main devices). 3. Applications of force and mass measurement (automatic weighing; mass flow measurement; quality control in the process industry; sensor technologies; practical applications; special applications in industry, trade, etc.). Deadline for submission of abstracts: 30 June 1992. For further information please contact: Finnish Society of Automation, Asemapäällikönkatu 12C, SF-00520 Helsinki, Finland. Phone: Int. +3580 1461 644, Fax: Int. +3580 1461 650.
Teleoperation experiments with a Utah/MIT hand and a VPL DataGlove
NASA Technical Reports Server (NTRS)
Clark, D.; Demmel, J.; Hong, J.; Lafferriere, Gerardo; Salkind, L.; Tan, X.
1989-01-01
A teleoperation system capable of controlling a Utah/MIT Dextrous Hand using a VPL DataGlove as a master is presented. Additionally the system is capable of running the dextrous hand in robotic (autonomous) mode as new programs are developed. The software and hardware architecture used is presented and the experiments performed are described. The communication and calibration issues involved are analyzed and applications to the analysis and development of automated dextrous manipulations are investigated.
1988-04-01
[Badly garbled scan; only an instrument calibration table fragment is recoverable: Recorder, Honeywell ENV 6047 D, calibrated 4-20-87, due 10-20-87; Sand and Dust Recorder, Leeds & Northrup E 6034 D, calibrated 4-15-87, due 10-15-87; Anemometer ...]
Design and analysis of an automatic method of measuring silicon-controlled-rectifier holding current
NASA Technical Reports Server (NTRS)
Maslowski, E. A.
1971-01-01
The design of an automated SCR holding-current measurement system is described. The circuits used in the measurement system were designed to meet the major requirements of automatic data acquisition, reliability, and repeatability. Performance data are presented and compared with calibration data. The data verified the accuracy of the measurement system. Data taken over a 48-hr period showed that the measurement system operated satisfactorily and met all the design requirements.
Correction of microplate location effects improves performance of the thrombin generation test.
Liang, Yideng; Woodle, Samuel A; Shibeko, Alexey M; Lee, Timothy K; Ovanesov, Mikhail V
2013-07-05
The microplate-based thrombin generation test (TGT) is widely used as a clinical measure of global hemostatic potential, and it has become a useful tool for the control of drug potency and quality by drug manufacturers. However, the convenience of the microtiter plate technology can be deceiving: microplate assays are prone to location-based variability in different parts of the microtiter plate. In this report, we evaluated the well-to-well consistency of the TGT variant specifically applied to the quantitative detection of thrombogenic substances in an immune globulin product. We also studied the utility of previously described microplate layout designs in the TGT experiment. Location of the sample on the microplate (location effect) contributes to the variability of TGT measurements. Manual pipetting techniques and applications of the TGT to the evaluation of procoagulant enzymatic substances are especially sensitive. The effects were not sensitive to temperature or to the choice of microplate reader. The smallest location effects were observed with an automated dispenser-based calibrated thrombogram instrument. Even for an automated instrument, the use of a calibration curve resulted in up to 30% bias in thrombogenic potency assignment. Use of a symmetrical version of the strip-plot layout was demonstrated to minimize location artifacts even under worst-case conditions. Strip-plot layouts are required for quantitative thrombin-generation-based bioassays used in the biotechnological field.
CCD Strömvil Photometry of M 37
NASA Astrophysics Data System (ADS)
Boyle, R. P.; Janusz, R.; Kazlauskas, A.; Philip, A. G. Davis
2001-12-01
We have been working on a program of setting up standards in the Strömvil photometric system and have been doing CCD photometry of globular and open clusters. A previous paper (Boyle et al., BAAS, AAS Meeting #193, #68.08) described the results of observations made in the open cluster M 67, which we are setting up as one of the prime standard fields for Strömvil photometry. Here we discuss our observations of M 37, made with the Vatican Advanced Technology Telescope on Mt. Graham, Arizona. One of us (R.J.) has automated the data processing by a novel method. The Strömvil group is multinational. With this innovative automated, yet interactive, processing method, the same processing steps are applied systematically in IRAF by capturing them as presented in HTML files and submitting them to the IRAF command language. Use of the mouse avoids errors and accelerates the processing from raw data frames to calibrated photometry. From several G2 V stars in M 67 we have calculated mean color indices, which we compare to stars in M 37 to identify candidate G2 V stars there. Identifying such stars is relevant to the search for terrestrial exoplanets. Ultimately we will use the calibrated Strömvil indices to make photometric determinations of log g and Teff.
Jordt, Anne; Zelenka, Claudius; von Deimling, Jens Schneider; Koch, Reinhard; Köser, Kevin
2015-12-05
Several acoustic and optical techniques have been used for characterizing natural and anthropogenic gas leaks (carbon dioxide, methane) from the ocean floor. Here, single-camera-based methods for bubble stream observation have become an important tool, as they help estimate flux and bubble sizes under certain assumptions. However, they record only a projection of a bubble into the camera and therefore cannot capture the full 3D shape, which is particularly important for larger, non-spherical bubbles. The unknown distance of the bubble to the camera (making it appear larger or smaller than expected) as well as refraction at the camera interface introduce extra uncertainties. In this article, we introduce our wide-baseline stereo-camera deep-sea sensor, the bubble box, which overcomes these limitations by observing bubbles from two orthogonal directions using calibrated cameras. Besides the setup and the hardware of the system, we discuss appropriate calibration and the different automated processing steps (deblurring, detection, tracking, and 3D fitting) that are crucial to arrive at a 3D ellipsoidal shape and rise speed for each bubble. The obtained values for single bubbles can be aggregated into statistical bubble size distributions or fluxes for extrapolation based on diffusion and dissolution models and large-scale acoustic surveys. We demonstrate and evaluate the wide-baseline stereo measurement model using a controlled test setup with ground truth information.
Fast targeted analysis of 132 acidic and neutral drugs and poisons in whole blood using LC-MS/MS.
Di Rago, Matthew; Saar, Eva; Rodda, Luke N; Turfus, Sophie; Kotsos, Alex; Gerostamoulos, Dimitri; Drummer, Olaf H
2014-10-01
The aim of this study was to develop an LC-MS/MS-based screening technique covering a broad range of acidic and neutral drugs and poisons by combining a small sample volume and an efficient extraction technique with simple automated data processing. After protein precipitation of 100 μL of whole blood, 132 common acidic and neutral drugs and poisons, including non-steroidal anti-inflammatory drugs, barbiturates, anticonvulsants, antidiabetics, muscle relaxants, diuretics and superwarfarin rodenticides (47 quantitated, 85 reported as detected), were separated using a Shimadzu Prominence HPLC system with a C18 column (Kinetex XB-C18, 4.6 mm × 150 mm, 5 μm) and gradient elution with a mobile phase of 25 mM ammonium acetate buffer (pH 7.5)/acetonitrile. The drugs were detected using an AB Sciex® API 2000 LC-MS/MS system (ESI+ and ESI-, MRM mode, two transitions per analyte). The method was fully validated in accordance with international guidelines. Quantification data obtained using one-point calibration compared favorably to data obtained using multiple calibrants. The presented LC-MS/MS assay has proven applicable for the determination of the analytes in blood. The fast and reliable extraction method combined with automated processing offers high throughput and fast turnaround times for forensic and clinical toxicology.
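One-point calibration, as compared favorably against multi-point calibration above, amounts to a single response factor; a minimal Python sketch with illustrative names:

```python
def one_point_quant(area_analyte, area_istd, cal_area_analyte,
                    cal_area_istd, cal_conc):
    # Response factor from the single calibrant...
    rf = (cal_area_analyte / cal_area_istd) / cal_conc
    # ...applied to the sample's analyte/internal-standard area ratio.
    return (area_analyte / area_istd) / rf
```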
FY2017 Report on NISC Measurements and Detector Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Madison Theresa; Meierbachtol, Krista Cruse; Jordan, Tyler Alexander
FY17 work focused on automation, both of the measurement analysis and of the comparison with simulations. The experimental apparatus was relocated, and weeks of continuous measurements of the spontaneous fission source ²⁵²Cf were performed. Programs were developed to automate the conversion of measurements into ROOT data framework files with a simple terminal input. The complete analysis of the measurement (which includes energy calibration and the identification of correlated counts) can now be completed with a documented process that likewise involves one simple execution line. Finally, the hurdles of slow MCNP simulations resulting in low simulation statistics have been overcome with the generation of multi-run suites which make use of the high-performance computing resources at LANL. Preliminary comparisons of measurements and simulations have been performed and will be the focus of FY18 work.
BATSE imaging survey of the Galactic plane
NASA Technical Reports Server (NTRS)
Grindlay, J. E.; Barret, D.; Bloser, P. F.; Zhang, S. N.; Robinson, C.; Harmon, B. A.
1997-01-01
The Burst and Transient Source Experiment (BATSE) onboard the Compton Gamma Ray Observatory (CGRO) provides all-sky monitoring capability, occultation analysis and occultation imaging, which enables new and fainter sources to be searched for in relatively crowded fields. The occultation imaging technique is used in combination with an automated BATSE image scanner, allowing analysis of large data sets of occultation images for detections of candidate sources and for the construction of source catalogs and databases. This automated image scanner system is being tested on archival data in order to optimize the search and detection thresholds. The image search system, its calibration results and preliminary survey results on archival data are reported. The aim of the survey is to identify a complete sample of black hole candidates in the Galaxy and constrain the number of black hole systems and neutron star systems.
The NANOGrav Observing Program: Automation and Reproducibility
NASA Astrophysics Data System (ADS)
Brazier, Adam; Cordes, James; Demorest, Paul; Dolch, Timothy; Ferdman, Robert; Garver-Daniels, Nathaniel; Hawkins, Steven; Lam, Michael Timothy; Lazio, T. Joseph W.
2018-01-01
The NANOGrav Observing Program is a decades-long search for gravitational waves using pulsar timing which relies, for its sensitivity, on large data sets from observations of many pulsars. These are constructed through an intensive, long-term observing campaign. The nature of the program requires automation in the transfer and archiving of the large volume of raw telescope data, the calibration of those data, and making these resulting data products—required for diagnostic and data exploration purposes—available to NANOGrav members. Reproducibility of results is a key goal in this project, and essential to its success; it requires treating the software itself as a data product of the research, while ensuring easy access by, and collaboration between, members of NANOGrav, the International Pulsar Timing Array consortium (of which NANOGrav is a key member), as well as the wider astronomy community and the public.
NASA Astrophysics Data System (ADS)
Fezzani, Ridha; Berger, Laurent
2018-06-01
An automated signal-based method was developed to analyse seafloor backscatter data logged by a calibrated multibeam echosounder. The processing consists, first, of clustering each survey sub-area into a small number of homogeneous sediment types, based on the average backscatter level at one or several incidence angles. Second, it uses their local average angular response to extract discriminant descriptors, obtained by fitting the field data to the Generic Seafloor Acoustic Backscatter parametric model. Third, the descriptors are used for seafloor type classification. The method was tested on multi-year data recorded by a calibrated 90-kHz Simrad ME70 multibeam sonar operated in the Bay of Biscay, France, and the Celtic Sea, Ireland. It was applied, for seafloor-type classification into 12 classes, to a dataset of 158 spots surveyed for demersal and benthic fauna study and monitoring. Qualitative analyses and clusters classified using the extracted parameters show good discriminatory potential, indicating the robustness of this approach.
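A schematic Python version of the three stages, with the caveat that the functional form below is a crude stand-in for the actual Generic Seafloor Acoustic Backscatter (GSAB) parametric model, and all names are ours:

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.cluster import KMeans

def cluster_sediments(bs_level_db, n_types=12):
    # Stage 1: cluster sub-areas by average backscatter level.
    x = np.asarray(bs_level_db, float).reshape(-1, 1)
    return KMeans(n_clusters=n_types, n_init=10).fit_predict(x)

def angular_model(theta_deg, a, b, c):
    # Stand-in angular response: diffuse term plus a specular near-nadir peak.
    th = np.radians(theta_deg)
    return a + b * np.cos(th) ** 2 + c * np.exp(-(th / 0.1) ** 2)

def descriptors(incidence_deg, mean_bs_db):
    # Stage 2: fitted parameters become the discriminant descriptors.
    params, _ = curve_fit(angular_model, incidence_deg, mean_bs_db,
                          p0=[-30.0, 10.0, 20.0])
    return params   # Stage 3 feeds these to the classifier
```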
Developing of an automation for therapy dosimetry systems by using labview software
NASA Astrophysics Data System (ADS)
Aydin, Selim; Kam, Erol
2018-06-01
Traceability, accuracy and consistency of radiation measurements are essential in radiation dosimetry, particularly in radiotherapy, where the outcome of treatment is highly dependent on the radiation dose delivered to patients. It is therefore very important to provide reliable, accurate and fast calibration services for therapy dosimeters, since the radiation dose delivered to a radiotherapy patient is directly related to the accuracy and reliability of these devices. In this study, we report the performance of in-house developed, computer-controlled data acquisition and monitoring software for commercially available radiation therapy electrometers. The LabVIEW® software suite is used to provide reliable, fast and accurate calibration services. The software also collects environmental data such as temperature, pressure and humidity in order to use them in correction-factor calculations. With this software tool, better control over the calibration process is achieved and the need for human intervention is reduced. This is the first software tool that can control dosimeter systems frequently used in hospital radiotherapy, such as the Unidos Webline, Unidos E, Dose-1 and PC Electrometer.
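The environmental data feed the usual air-density correction for vented ionization chambers; a Python sketch of that standard factor, assuming reference conditions of 20 °C and 101.325 kPa (the paper does not state its reference values):

```python
def k_tp(temp_c, pressure_kpa, t0_c=20.0, p0_kpa=101.325):
    # Corrects a vented-chamber reading to reference temperature and pressure.
    return ((273.15 + temp_c) / (273.15 + t0_c)) * (p0_kpa / pressure_kpa)

# Example: a reading taken at 22.4 degC and 99.8 kPa:
# corrected_charge = raw_charge * k_tp(22.4, 99.8)
```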
NASA Astrophysics Data System (ADS)
Li, You Yun; Tsai, DeChang; Hwang, Weng Sing
2008-06-01
The purpose of this study is to develop a technique for numerically simulating the microstructure of 17-4PH (precipitation hardening) stainless steel during investment casting. A cellular automaton (CA) algorithm was adopted to simulate nucleation and grain growth. First, a calibration casting was made; then, by comparing the microstructures of the calibration casting with those simulated using different kinetic growth coefficients (a2, a3) in the CA, the most appropriate set of values for a2 and a3 was obtained. This set of values was then applied to the microstructure simulation of a separate casting, which was also actually made. Through this approach, the study arrived at a set of growth kinetic coefficients from the calibration casting, a2 = 2.9 × 10⁻⁵ and a3 = 1.49 × 10⁻⁷, which was then used to predict the microstructure of the other test casting. Consequently, a good correlation was found between the microstructure of the actual 17-4PH casting and the simulation result.
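The abstract does not state the growth law tied to (a2, a3); in CA solidification codes these coefficients commonly enter a polynomial dendrite-tip velocity in the local undercooling, sketched here in Python with the fitted values (units assumed):

```python
def growth_velocity(undercooling_k, a2=2.9e-5, a3=1.49e-7):
    # v(dT) = a2*dT^2 + a3*dT^3 -- a common KGT-type fit used in
    # cellular automaton grain-growth models (assumed form, m/s).
    return a2 * undercooling_k**2 + a3 * undercooling_k**3
```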
MacFarlane, Michael; Wong, Daniel; Hoover, Douglas A; Wong, Eugene; Johnson, Carol; Battista, Jerry J; Chen, Jeff Z
2018-03-01
In this work, we propose a new method of calibrating cone beam computed tomography (CBCT) data sets for radiotherapy dose calculation and plan assessment. The motivation for this patient-specific calibration (PSC) method is to develop an efficient, robust, and accurate CBCT calibration process that is less susceptible to deformable image registration (DIR) errors. Instead of mapping the CT numbers voxel-by-voxel as traditional DIR calibration methods do, the PSC method generates correlation plots between deformably registered planning CT and CBCT voxel values for each image slice. A linear calibration curve specific to each slice is then obtained by least-squares fitting and applied to that CBCT slice's voxel values. This allows each CBCT slice to be corrected using DIR without altering the patient geometry through regional DIR errors. A retrospective study was performed on 15 head-and-neck cancer patients, each having routine CBCTs and a middle-of-treatment re-planning CT (reCT). The original treatment plan was re-calculated on each patient's reCT image set (serving as the gold standard) as well as on the image sets produced by the voxel-to-voxel DIR, density-override, and new PSC calibration methods. The dose accuracy of each calibration method was compared to the reference reCT data set using common dose-volume metrics and 3D gamma analysis. A phantom study was also performed to assess the accuracy of the DIR and PSC CBCT calibration methods against planning CT. Compared with the reCT gold standard, the average dose metric differences were ≤ 1.1% for all three methods (PSC: -0.3%; DIR: -0.7%; density-override: -1.1%). The average gamma pass rates with 3%, 3 mm thresholds were also similar among the three techniques (PSC: 95.0%; DIR: 96.1%; density-override: 94.4%). An automated patient-specific calibration method was thus developed which yielded strong dosimetric agreement with the results obtained using a re-planning CT for head-and-neck patients.
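The core of the PSC method as described (a least-squares line per axial slice between deformably registered planning-CT and CBCT voxel values) fits in a few lines of Python; the array layout and names are assumptions:

```python
import numpy as np

def psc_calibrate(cbct, deformed_ct):
    # Both volumes co-registered and indexed (slice, row, col).
    out = np.empty_like(cbct, dtype=float)
    for z in range(cbct.shape[0]):
        # Correlation plot for this slice, reduced to a least-squares line.
        a, b = np.polyfit(cbct[z].ravel(), deformed_ct[z].ravel(), 1)
        # Apply the slice-specific calibration without moving any voxels,
        # so regional DIR errors cannot distort the patient geometry.
        out[z] = a * cbct[z] + b
    return out
```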
Evaluation of Potential Evapotranspiration from a Hydrologic Model on a National Scale
NASA Astrophysics Data System (ADS)
Hakala, Kirsti; Markstrom, Steven; Hay, Lauren
2015-04-01
The U.S. Geological Survey has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development and to facilitate the application of simulations on the scale of the continental U.S. The NHM has a consistent geospatial fabric for modeling, consisting of over 100,000 hydrologic response units (HRUs). Each HRU requires accurate parameter estimates, some of which are obtained from automated calibration. However, improved calibration can be achieved by initially utilizing as many parameters as possible from national data sets. This presentation investigates the effectiveness of calculating potential evapotranspiration (PET) parameters from mean monthly values in the NOAA PET Atlas. Additional PET products are then used to evaluate the PET parameters. Effectively utilizing existing national-scale data sets can simplify the effort of establishing a robust NHM.
An Automatic Image-Based Modelling Method Applied to Forensic Infography
Zancajo-Blazquez, Sandra; Gonzalez-Aguilera, Diego; Gonzalez-Jorge, Higinio; Hernandez-Lopez, David
2015-01-01
This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative for modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives, as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention; and (iii) high-quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene performed by the scientific police with any camera can be transformed into a scaled 3D model.
Numerical computation of Pop plot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menikoff, Ralph
The Pop plot — distance-of-run to detonation versus initial shock pressure — is a key characterization of shock initiation in a heterogeneous explosive. Reactive burn models for high explosives (HE) must reproduce the experimental Pop plot to have any chance of accurately predicting shock initiation phenomena. This report describes a methodology for automating the computation of a Pop plot for a specific explosive with a given HE model. Illustrative examples of the computation are shown for PBX 9502 with three burn models (SURF, WSD and Forest Fire) utilizing the xRage code, the Eulerian ASC hydrocode at LANL. Comparison of the numerical and experimental Pop plot can be the basis for a validation test or an aid in calibrating the burn rate of an HE model. Issues with calibration are discussed.
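Pop plots are conventionally presented as near-linear in log-log coordinates, so the automated comparison reduces to a two-parameter fit; an illustrative Python sketch with made-up numbers:

```python
import numpy as np

# Illustrative (pressure, run-distance) pairs, as might be harvested
# from a suite of automated xRage runs (values are not real data).
p_gpa = np.array([4.0, 6.0, 8.0, 12.0])
x_mm = np.array([20.0, 9.5, 5.2, 2.1])

# log10(x_run) = a + b*log10(P); compare (a, b) with the experimental fit.
b, a = np.polyfit(np.log10(p_gpa), np.log10(x_mm), 1)
```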
Distribution system model calibration with big data from AMI and PV inverters
Peppanen, Jouni; Reno, Matthew J.; Broderick, Robert J.; ...
2016-03-03
Efficient management and coordination of distributed energy resources with advanced automation schemes requires accurate distribution system modeling and monitoring. Big data from smart meters and photovoltaic (PV) micro-inverters can be leveraged to calibrate existing utility models. This paper presents computationally efficient distribution system parameter estimation algorithms to improve the accuracy of existing utility feeder radial secondary circuit model parameters. The method is demonstrated using a real utility feeder model with advanced metering infrastructure (AMI) and PV micro-inverters, along with alternative parameter estimation approaches that can be used to improve secondary circuit models when limited measurement data is available. Lastly, the parameter estimation accuracy is demonstrated for both a three-phase test circuit with typical secondary circuit topologies and single-phase secondary circuits in a real mixed-phase test system.
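As one concrete flavour of such parameter estimation, a secondary-circuit series resistance can be regressed from AMI interval data; a hedged Python sketch assuming a simple V_meter = V_transformer - R*I model (not the paper's algorithm):

```python
import numpy as np

def estimate_resistance(v_xfmr, v_meter, current_a):
    # Least-squares R over all AMI intervals for one service drop.
    drop = np.asarray(v_xfmr, float) - np.asarray(v_meter, float)
    i = np.asarray(current_a, float).reshape(-1, 1)
    r, *_ = np.linalg.lstsq(i, drop, rcond=None)
    return float(r[0])   # ohms
```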
Monitoring forest land from high altitude and from space
NASA Technical Reports Server (NTRS)
1972-01-01
The significant findings are reported for remote sensing of forest lands conducted during the period October 1, 1965 to December 31, 1972. Forest inventory research included the use of aircraft and space imagery for forest and nonforest land classification, and land use classification by automated procedures, multispectral scanning, and computerized mapping. Forest stress studies involved previsual detection of ponderosa pine under stress from insects and disease, bark beetle infestations in the Black Hills, and root disease impacts on forest stands. Standardization and calibration studies were made to develop a field test of an ERTS-matched four-channel spectrometer. Calibration of focal plane shutters and mathematical modeling of film characteristic curves were also studied. Documents published as a result of all forestry studies funded by NASA for the Earth Resources Survey Program from 1965 through 1972 are listed.
Temporal Analysis and Automatic Calibration of the Velodyne HDL-32E LiDAR System
NASA Astrophysics Data System (ADS)
Chan, T. O.; Lichti, D. D.; Belton, D.
2013-10-01
At the end of the first quarter of 2012, more than 600 Velodyne LiDAR systems had been sold worldwide for various robotic and high-accuracy survey applications. The ultra-compact Velodyne HDL-32E LiDAR has become a predominant sensor for many applications that require lower sensor size/weight and cost. For high-accuracy applications, cost-effective calibration methods with minimal manual intervention are always desired by users. However, calibration is complicated by the Velodyne LiDAR's narrow vertical field of view and the highly time-variant nature of its measurements. In this paper, the temporal stability of the HDL-32E is first analysed as the motivation for developing a new, automated calibration method. This is followed by a detailed description of the calibration method, which is driven by a novel segmentation method for extracting vertical cylindrical features from the Velodyne point clouds. The proposed segmentation method exploits the Velodyne point cloud's slice-like nature and first decomposes the point clouds into 2D layers. The layers are then treated as 2D images and processed with the Generalized Hough Transform, which extracts the points distributed in circular patterns within each layer. Subsequently, the vertical cylindrical features can be readily extracted from the whole point cloud based on the previously extracted points. The points are passed to the calibration, which estimates the cylinder parameters and the LiDAR's additional parameters simultaneously by constraining the segmented points to fit the cylindrical geometric model in such a way that the weighted sum of the adjustment residuals is minimized. The proposed calibration is highly automated, allowing end users to obtain the time-variant additional parameters instantly and frequently whenever vertical cylindrical features are present in a scene. The methods were verified with two different real datasets, and the results suggest that up to 78.43% accuracy improvement can be achieved for the HDL-32E using the proposed calibration method.
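The layer-wise circular-pattern extraction could be prototyped in Python/scikit-image as below; the grid resolution, radius range and peak count are assumptions, and this is only a rough analogue of the Generalized Hough Transform step applied to each decomposed 2D layer:

```python
import numpy as np
from skimage.transform import hough_circle, hough_circle_peaks

def circles_in_layer(layer_xy, grid_m=0.05, radii_px=np.arange(3, 30), n_peaks=3):
    # Rasterize one 2D layer of the point cloud into a binary image.
    ij = np.round((layer_xy - layer_xy.min(axis=0)) / grid_m).astype(int)
    img = np.zeros(ij.max(axis=0) + 1, dtype=bool)
    img[ij[:, 0], ij[:, 1]] = True
    # Circular Hough accumulator over candidate radii.
    acc = hough_circle(img, radii_px)
    _, cx, cy, r = hough_circle_peaks(acc, radii_px, total_num_peaks=n_peaks)
    return cx, cy, r   # cylinder cross-section candidates in this layer
```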
NASA Astrophysics Data System (ADS)
Ala-aho, Pertti; Soulsby, Chris; Wang, Hailong; Tetzlaff, Doerthe
2017-04-01
Understanding the role of groundwater in runoff generation in headwater catchments is a challenge in hydrology, particularly so in data-scarce areas. Fully integrated surface-subsurface modelling has shown potential for increasing process understanding of runoff generation, but high data requirements and difficulties in model calibration are typically assumed to preclude its use in catchment-scale studies. We used a fully integrated surface-subsurface hydrological simulator to enhance groundwater-related process understanding in a headwater catchment with a rich background of empirical data. To set up the model we used minimal data that could reasonably be expected to exist for any experimental catchment. A novel aspect of our approach was the use of simplified model parameterisation and the inclusion of parameters from all model domains (surface, subsurface, evapotranspiration) in automated model calibration. Calibration aimed not only to improve model fit, but also to test the information content of the observations (streamflow, remotely sensed evapotranspiration, median groundwater level) used in the calibration objective functions. We identified sensitive parameters in all model domains, demonstrating that model calibration should include parameters from each of them. Incorporating groundwater data in the calibration objectives improved the model fit for groundwater levels, but the simulations did not reproduce the remotely sensed evapotranspiration time series well even after calibration. Spatially explicit model output improved our understanding of how groundwater functions in maintaining streamflow generation, primarily via saturation excess overland flow. Steady groundwater inputs created saturated conditions in the valley-bottom riparian peatlands, leading to overland flow even during dry periods. Groundwater on the hillslopes was more dynamic in its response to rainfall, acting to expand the saturated area extent and thereby promoting saturation excess overland flow during rainstorms. Our work shows the potential of using integrated surface-subsurface modelling, alongside rigorous model calibration, to better understand and visualise the role of groundwater in runoff generation even with limited datasets.
Mainali, Dipak; Seelenbinder, John
2016-05-01
Quick and presumptive identification of seized drug samples without destroying evidence is necessary for law enforcement officials to control the trafficking and abuse of drugs. This work reports an automated screening method to detect the presence of cocaine in seized samples using portable Fourier transform infrared (FT-IR) spectrometers. The method is based on the identification of well-defined characteristic vibrational frequencies related to the functional groups of the cocaine molecule and is fully automated through the use of an expert system. Traditionally, analysts look for key functional group bands in infrared spectra, and characterization of the molecules present depends on user interpretation. This implies the need for user expertise, especially in samples that are likely mixtures. As such, this approach is subjective and not suitable for non-experts. The method proposed in this work uses the well-established "center of gravity" peak-picking mathematical algorithm and combines it with the conditional reporting feature in MicroLab software to provide an automated method that can be successfully employed by users with varied experience levels. The method reports the confidence level of cocaine presence only when a certain number of cocaine-related peaks are identified. Unlike library search and chemometric methods, which depend on the library database or the training-set samples used to build the calibration model, the proposed method is relatively independent of the adulterants and diluents present in the seized mixture. This automated method, in combination with a portable FT-IR spectrometer, provides law enforcement officials, criminal investigators, or forensic experts a quick field-based prescreening capability for the presence of cocaine in seized drug samples.
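The center-of-gravity peak position named above is the intensity-weighted mean wavenumber over a band; a Python sketch (the band limits are placeholders, not the method's actual cocaine windows):

```python
import numpy as np

def cog_peak(wavenumber, absorbance, lo, hi):
    # Restrict to the band of interest, e.g. a carbonyl stretch window.
    sel = (wavenumber >= lo) & (wavenumber <= hi)
    w, a = wavenumber[sel], absorbance[sel]
    return np.sum(w * a) / np.sum(a)   # intensity-weighted band position
```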
Ferreira, Vicente; Herrero, Paula; Zapata, Julián; Escudero, Ana
2015-08-14
SPME is extremely sensitive to experimental parameters affecting liquid-gas and gas-solid distribution coefficients. Our aims were to measure the weights of these factors and to design a multivariate strategy, based on the addition of a pool of internal standards, to minimize matrix effects. Synthetic but realistic wines containing selected analytes and variable amounts of ethanol, non-volatile constituents and major volatile compounds were prepared following a factorial design. The ANOVA study revealed that, even with strong matrix dilution, matrix effects are important and additive, with non-significant interaction effects, and that the presence of major volatile constituents is the most dominant factor. A single internal standard provided a robust calibration for 15 of the 47 analytes. Two different multivariate calibration strategies based on Partial Least Squares regression were then run in order to build calibration functions based on 13 different internal standards able to cope with matrix effects. The first is based on the calculation of Multivariate Internal Standards (MIS), linear combinations of the normalized signals of the 13 internal standards, which provide the expected area of a given unit of analyte present in each sample. The second strategy is a direct calibration relating concentration to the 13 relative areas measured in each sample for each analyte. Overall, 47 different compounds can be reliably quantified in a single, fully automated method with overall uncertainties better than 15%.
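The second strategy maps directly from the 13 relative areas to concentration with PLS; a minimal scikit-learn sketch (the component count and names are assumptions):

```python
from sklearn.cross_decomposition import PLSRegression

def fit_direct_calibration(rel_areas, conc, n_components=5):
    # rel_areas: (n_samples, 13) analyte area divided by each IS area;
    # conc: (n_samples,) known concentrations in the calibration wines.
    return PLSRegression(n_components=n_components).fit(rel_areas, conc)

# model.predict(rel_areas_unknown) then yields concentrations in new wines.
```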
Photogrammetry in 3d Modelling of Human Bone Structures from Radiographs
NASA Astrophysics Data System (ADS)
Hosseinian, S.; Arefi, H.
2017-05-01
Photogrammetry can have a great impact on the success of medical processes for diagnosis, treatment and surgery. Precise 3D models, which can be achieved by photogrammetry, considerably improve the results of orthopedic surgeries and processes. The usual 3D imaging techniques, computed tomography (CT) and magnetic resonance imaging (MRI), have some limitations, such as being usable only in non-weight-bearing positions, cost, high radiation dose (for CT), and, for MRI, restrictions for patients with ferromagnetic implants or objects in their bodies. 3D reconstruction of bony structures from biplanar X-ray images is a reliable and accepted alternative for achieving accurate 3D information with a low radiation dose in weight-bearing positions. The information can be obtained from multi-view radiographs by using photogrammetry. The primary step for 3D reconstruction of human bone structures from medical X-ray images is calibration, which is done by applying the principles of photogrammetry. After the calibration step, 3D reconstruction can be done using efficient methods with different levels of automation. Because X-ray images differ in nature from optical images, the calibration step of stereoradiography poses distinct challenges in medical applications. In this paper, after demonstrating the general steps and principles of 3D reconstruction from X-ray images, calibration methods for 3D reconstruction from radiographs are compared and assessed from a photogrammetric point of view, considering various criteria such as their camera models, calibration objects, accuracy, availability, patient-friendliness and cost.
Modeling Photo-multiplier Gain and Regenerating Pulse Height Data for Application Development
NASA Astrophysics Data System (ADS)
Aspinall, Michael D.; Jones, Ashley R.
2018-01-01
Systems that adopt organic scintillation detector arrays often require a calibration process prior to the intended measurement campaign to correct for significant performance variances between detectors within the array. These differences exist because of the low tolerances associated with photo-multiplier tube technology and environmental influences. Differences in detector response can be corrected for by adjusting the supplied photo-multiplier tube voltage to control its gain, observing the effect that this has on the pulse height spectra from a gamma-only calibration source with a defined photo-peak. Automated methods that analyze these spectra and adjust the photo-multiplier tube bias accordingly are emerging for hardware that integrates acquisition electronics and high-voltage control. However, the development of such algorithms requires access to the hardware, multiple detectors and a calibration source for prolonged periods, all with associated constraints and risks. In this work, we report on a software function and related models developed to rescale and regenerate pulse height data acquired from a single scintillation detector. Such a function can be used to generate large and varied pulse height data sets for integration-testing algorithms that automatically response-match multiple detectors using pulse height spectrum analysis. Furthermore, a function of this sort removes the dependence on multiple detectors, digital analyzers and a calibration source. Results show a good match between the real and regenerated pulse height data. The function has also been used successfully to develop auto-calibration algorithms.
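One plausible core of such a rescaling function exploits the approximate power-law dependence of PMT gain on supply voltage; a Python sketch in which the exponent is an assumed, detector-dependent value rather than the paper's model:

```python
def rescale_pulse_heights(pulse_heights, v_orig, v_new, kn=7.0):
    # Gain ~ V**kn, where kn is set by the dynode count and material
    # (assumed here; it must be measured or fitted for a real tube).
    g = (v_new / v_orig) ** kn
    return [p * g for p in pulse_heights]
```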
Active point out-of-plane ultrasound calibration
NASA Astrophysics Data System (ADS)
Cheng, Alexis; Guo, Xiaoyu; Zhang, Haichong K.; Kang, Hyunjae; Etienne-Cummings, Ralph; Boctor, Emad M.
2015-03-01
Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages, such as ease of use, real-time image acquisition, and the absence of ionizing radiation, ultrasound is a common intraoperative medical imaging modality in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid-body transformation between a tracked marker attached to the transducer and the ultrasound image. Point-based phantoms are considered accurate, but their calibration framework assumes that the point lies in the image plane. In this work, we present the use of an active point phantom and a calibration framework that accounts for the elevational uncertainty of the point. Given the lateral and axial position of the point in the ultrasound image, we approximate a circle in the axial-elevational plane with a radius equal to the axial position. The standard approach transforms all of the imaged points to a single physical point. In our approach, we minimize the distances between the circular subsets of each image, which ideally intersect at a single point. We ran simulations for noiseless and noisy cases, presenting results on out-of-plane estimation errors, calibration estimation errors, and point reconstruction precision. We also performed an experiment using a robot arm as the tracker, obtaining a point reconstruction precision of 0.64 mm.
NASA Astrophysics Data System (ADS)
Ortolano, Gaetano; Visalli, Roberto; Godard, Gaston; Cirrincione, Rosolino
2018-06-01
We present a new ArcGIS®-based tool developed in the Python programming language for calibrating EDS/WDS X-ray element maps, with the aim of acquiring quantitative information of petrological interest. The calibration procedure is based on a multiple linear regression technique that takes into account interdependence among elements and is constrained by the stoichiometry of minerals. The procedure requires an appropriate number of spot analyses for use as internal standards and provides several test indexes for a rapid check of calibration accuracy. The code is based on an earlier image-processing tool designed primarily for classifying minerals in X-ray element maps; the original Python code has now been enhanced to yield calibrated maps of mineral end-members or the chemical parameters of each classified mineral. The semi-automated procedure can be used to extract a dataset that is automatically stored within queryable tables. As a case study, the software was applied to an amphibolite-facies garnet-bearing micaschist. The calibrated images obtained for both anhydrous (i.e., garnet and plagioclase) and hydrous (i.e., biotite) phases show a good fit with corresponding electron microprobe analyses. This new GIS-based tool package can thus find useful application in petrology and materials science research. Moreover, the huge quantity of data extracted opens new opportunities for the development of a thin-section microchemical database that, using a GIS platform, can be linked with other major global geoscience databases.
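The regression step can be pictured as a multi-output least-squares from raw count maps to concentrations, anchored on the spot analyses used as internal standards; a hedged NumPy sketch (the stoichiometric constraints and test indexes of the actual tool are omitted):

```python
import numpy as np

def calibrate_maps(count_maps, spot_counts, spot_wtpct):
    # count_maps: (n_elem, H, W) X-ray counts; spot_counts: (n_spots, n_elem)
    # counts at the spot locations; spot_wtpct: (n_spots, n_out) wt% values.
    A = np.hstack([spot_counts, np.ones((len(spot_counts), 1))])
    coef, *_ = np.linalg.lstsq(A, spot_wtpct, rcond=None)  # interdependent fit
    n_elem, H, W = count_maps.shape
    px = count_maps.reshape(n_elem, -1).T
    wt = np.hstack([px, np.ones((px.shape[0], 1))]) @ coef
    return wt.T.reshape(-1, H, W)   # one calibrated map per output quantity
```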
NASA Astrophysics Data System (ADS)
Baray, J. L.; Fréville, P.; Montoux, N.; Chauvigné, A.; Hadad, D.; Sellegri, K.
2018-04-01
A Rayleigh-Mie-Raman LIDAR has provided vertical profiles of tropospheric variables at Clermont-Ferrand (France) since 2008, in order to describe boundary layer dynamics, tropospheric aerosols, cirrus and water vapor. It is included in the EARLINET network. We performed hardware and software developments to upgrade data quality and calibration and to improve automation. We present an overview of the system, some examples of measurements, and a preliminary geophysical analysis of the data.
2012-06-04
central Tibetan Plateau. Automated hypocenter locations in south-central Tibet were finalized. Refinements included an update of the model used for... central Tibet. A subset of ~7,900 events with 25+ arrivals is considered well-located based on kilometer-scale differences relative to manually located... propagation in the Nepal Himalaya and the south-central Tibetan Plateau. The 2002-2005 experiment consisted of 233 stations along a dense 800 km linear
Note: Digital laser frequency auto-locking for inter-satellite laser ranging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Yingxin; Yeh, Hsien-Chi, E-mail: yexianji@mail.hust.edu.cn; Li, Hongyin
2016-05-15
We present a prototype of a laser frequency auto-locking and re-locking control system designed for laser frequency stabilization in an inter-satellite laser ranging system. The controller has been implemented on field programmable gate arrays and programmed with LabVIEW software. The controller allows initial frequency calibration and lock-in of a free-running laser to a Fabry-Pérot cavity. Since it allows automatic recovery from unlocked conditions, it benefits automated in-orbit operations. Program design and experimental results are demonstrated.
Sensitivity Analysis of an Automated Calibration Routine for Airborne Cameras
2013-03-01
Karl Walli, Lt Col, USAF (Member). AFIT-ENG-13-M-51. Abstract: Given a known aircraft... pitch up. 7. Holding Pattern - A standard holding pattern with 30 second straight legs and 180° turns using 30° angle of bank at each end. 8. S... [figure residue removed: plot of error (meters) versus pixel noise standard deviation]
CCFpams: Atmospheric stellar parameters from cross-correlation functions
NASA Astrophysics Data System (ADS)
Malavolta, Luca; Lovis, Christophe; Pepe, Francesco; Sneden, Christopher; Udry, Stephane
2017-07-01
CCFpams allows the measurement of stellar temperature, metallicity and gravity within a few seconds and in a completely automated fashion. Rather than performing comparisons with spectral libraries, the technique is based on the determination of several cross-correlation functions (CCFs) obtained by including spectral features with different sensitivity to the photospheric parameters. Literature stellar parameters of high signal-to-noise (SNR) and high-resolution HARPS spectra of FGK Main Sequence stars are used to calibrate the stellar parameters as a function of CCF areas.
A radar data processing and enhancement system
NASA Technical Reports Server (NTRS)
Anderson, K. F.; Wrin, J. W.; James, R.
1986-01-01
This report describes the space position data processing system of the NASA Western Aeronautical Test Range. The system is installed at the Dryden Flight Research Facility of NASA Ames Research Center. This operational radar data system (RADATS) provides simultaneous data processing for multiple data inputs and tracking and antenna pointing outputs while performing real-time monitoring, control, and data enhancement functions. Experience in support of the space shuttle and aeronautical flight research missions is described, as well as the automated calibration and configuration functions of the system.
1994-12-01
important for improving the TWSTFT capabilities. An automated system for this purpose has been developed from the initial design at NMi-VSL. It... September 1994 together with a USNO portable station on a calibration trip to European TWSTFT earth stations. 1. Introduction The Two-Way Satellite... Time and Frequency Transfer (TWSTFT) method (Fig. 1) is used to compare two clocks or time scales which are often located at great distances from each
SWIR calibration of Spectralon reflectance factor
NASA Astrophysics Data System (ADS)
Georgiev, Georgi T.; Butler, James J.; Cooksey, Catherine; Ding, Leibo; Thome, Kurtis J.
2011-11-01
Satellite instruments operating in the reflective solar wavelength region require accurate and precise determination of the Bidirectional Reflectance Factor (BRF) of laboratory-based diffusers used in their pre-flight and on-orbit radiometric calibrations. BRF measurements are required throughout the reflected-solar spectrum from the ultraviolet through the shortwave infrared. Spectralon diffusers are commonly used as a reflectance standard for bidirectional and hemispherical geometries. The Diffuser Calibration Laboratory (DCaL) at NASA's Goddard Space Flight Center is a secondary calibration facility with reflectance measurements traceable to those made by the Spectral Tri-function Automated Reference Reflectometer (STARR) facility at the National Institute of Standards and Technology (NIST). For more than two decades, the DCaL has provided numerous NASA projects with BRF data in the ultraviolet (UV), visible (VIS) and the Near InfraRed (NIR) spectral regions. Presented in this paper are measurements of BRF from 1475 nm to 1625 nm obtained using an indium gallium arsenide detector and a tunable coherent light source. The sample was a 50.8 mm (2 in) diameter, 99% white Spectralon target. The BRF results are discussed and compared to empirically generated data from a model based on NIST certified values of 6° directional-hemispherical spectral reflectance factors from 900 nm to 2500 nm. Employing a new NIST capability for measuring bidirectional reflectance using a cooled, extended InGaAs detector, BRF calibration measurements of the same sample were also made using NIST's STARR from 1475 nm to 1625 nm at an incident angle of 0° and a viewing angle of 45°. The total combined uncertainty for BRF in this ShortWave Infrared (SWIR) range is less than 1%. This measurement capability will evolve into a BRF calibration service in the SWIR region in support of NASA remote sensing missions.
NASA Astrophysics Data System (ADS)
Sargent, Dusty; Chen, Chao-I.; Wang, Yuan-Fang
2010-02-01
The paper reports a fully-automated, cross-modality sensor data registration scheme between video and magnetic tracker data. This registration scheme is intended for use in computerized imaging systems to model the appearance, structure, and dimension of human anatomy in three dimensions (3D) from endoscopic videos, particularly colonoscopic videos, for cancer research and clinical practices. The proposed cross-modality calibration procedure operates this way: Before a colonoscopic procedure, the surgeon inserts a magnetic tracker into the working channel of the endoscope or otherwise fixes the tracker's position on the scope. The surgeon then maneuvers the scope-tracker assembly to view a checkerboard calibration pattern from a few different viewpoints for a few seconds. The calibration procedure is then completed, and the relative pose (translation and rotation) between the reference frames of the magnetic tracker and the scope is determined. During the colonoscopic procedure, the readings from the magnetic tracker are used to automatically deduce the pose (both position and orientation) of the scope's reference frame over time, without complicated image analysis. Knowing the scope movement over time then allows us to infer the 3D appearance and structure of the organs and tissues in the scene. While there are other well-established mechanisms for inferring the movement of the camera (scope) from images, they are often sensitive to mistakes in image analysis, error accumulation, and structure deformation. The proposed method using a magnetic tracker to establish the camera motion parameters thus provides a robust and efficient alternative for 3D model construction. Furthermore, the calibration procedure requires neither special training nor expensive calibration equipment (except for a camera calibration pattern, a checkerboard, which can be printed on any laser or inkjet printer).
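The tracker-to-scope pose recovery is a hand-eye-type problem; a sketch under that assumption using OpenCV's generic routine (the paper's own solver is not specified, and all names are hypothetical):

    import cv2

    def tracker_to_scope_pose(R_tracker, t_tracker, R_board, t_board):
        # R_tracker/t_tracker: magnetic-tracker poses in the field-generator
        # frame for each checkerboard viewpoint; R_board/t_board: checkerboard
        # poses in the camera frame (e.g. from cv2.findChessboardCorners
        # followed by cv2.solvePnP). Solves the fixed camera-to-tracker
        # transform in the AX = XB hand-eye formulation.
        R_x, t_x = cv2.calibrateHandEye(R_tracker, t_tracker, R_board, t_board,
                                        method=cv2.CALIB_HAND_EYE_TSAI)
        return R_x, t_x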
NASA Technical Reports Server (NTRS)
Czapla-Myers, Jeffrey; Ong, Lawrence; Thome, Kurtis; McCorkel, Joel
2015-01-01
The Earth-Observing One (EO-1) satellite was launched in 2000. Radiometric calibration of Hyperion and the Advanced Land Imager (ALI) has been performed throughout the mission lifetime using various techniques that include ground-based vicarious calibration, pseudo-invariant calibration sites, and also the moon. The EO-1 mission is nearing the end of its useful lifetime, and this work seeks to validate the radiometric calibration of Hyperion and ALI from 2013 until the satellite is decommissioned. Hyperion and ALI have been routinely collecting data at the automated Radiometric Calibration Test Site [RadCaTS/Railroad Valley (RRV)] since launch. In support of this study, the frequency of the acquisitions at RadCaTS has been significantly increased since 2013, which provides an opportunity to analyze the radiometric stability and accuracy during the final stages of the EO-1 mission. The analysis of Hyperion and ALI is performed using a suite of ground instrumentation that measures the atmosphere and surface throughout the day. The final product is an estimate of the top-of-atmosphere (TOA) spectral radiance, which is compared to Hyperion and ALI radiances. The results show that Hyperion agrees with the RadCaTS predictions to within 5% in the visible and near-infrared (VNIR) and to within 10% in the shortwave infrared (SWIR). The 2013-2014 ALI results show agreement to within 6% in the VNIR and 7.5% in the SWIR bands. A cross comparison between ALI and the Operational Land Imager (OLI) using RadCaTS as a transfer source shows agreement of 3%-6% during the period of 2013-2014.
Automated aerial image based CD metrology initiated by pattern marking with photomask layout data
NASA Astrophysics Data System (ADS)
Davis, Grant; Choi, Sun Young; Jung, Eui Hee; Seyfarth, Arne; van Doornmalen, Hans; Poortinga, Eric
2007-05-01
The photomask is a critical element in the lithographic image transfer process from the drawn layout to the final structures on the wafer. The non-linearity of the imaging process and the related MEEF impose a tight control requirement on the photomask critical dimensions. Critical dimensions can be measured in aerial images with hardware emulation. This is a more recent complement to the standard scanning electron microscope measurement of wafers and photomasks. Aerial image measurement includes non-linear, 3-dimensional, and materials effects on imaging that cannot be observed directly by SEM measurement of the mask. Aerial image measurement excludes the processing effects of printing and etching on the wafer. This presents a unique contribution to the difficult process control and modeling tasks in mask making. In the past, aerial image measurements have been used mainly to characterize the printability of mask repair sites. Development of photomask CD characterization with the AIMS™ tool was motivated by the benefit of MEEF sensitivity and the shorter feedback loop compared to wafer exposures. This paper describes a new application that includes: an improved interface for the selection of meaningful locations using the photomask and design layout data with the Calibre™ Metrology Interface, an automated recipe generation process, an automated measurement process, and automated analysis and result reporting on a Carl Zeiss AIMS™ system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egorov, Oleg; O'Hara, Matthew J.; Grate, Jay W.
An automated fluidic instrument is described that rapidly determines the total 99Tc content of aged nuclear waste samples, where the matrix is chemically and radiologically complex and the existing speciation of the 99Tc is variable. The monitor links microwave-assisted sample preparation with an automated anion exchange column separation and detection using a flow-through solid scintillator detector. The sample preparation steps acidify the sample, decompose organics, and convert all Tc species to the pertechnetate anion. The column-based anion exchange procedure separates the pertechnetate from the complex sample matrix, so that radiometric detection can provide accurate measurement of 99Tc. We developed a preprogrammed spike addition procedure to automatically determine a matrix-matched calibration. The overall measurement efficiency that is determined simultaneously provides a self-diagnostic parameter for the radiochemical separation and overall instrument function. Continuous, automated operation was demonstrated over the course of 54 h, which resulted in the analysis of 215 samples plus 54 hourly spike-addition samples, with consistent overall measurement efficiency for the operation of the monitor. A sample can be processed and measured automatically in just 12.5 min with a detection limit of 23.5 Bq/mL of 99Tc in low activity waste (0.495 mL sample volume), with better than 10% RSD precision at concentrations above the quantification limit. This rapid automated analysis method was developed to support nuclear waste processing operations planned for the Hanford nuclear site.
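The spike-addition logic reduces to simple arithmetic; a hypothetical sketch (function names and flow-detector units are assumptions, with the sample volume taken from the abstract):

    def measurement_efficiency(cps_spiked, cps_sample, spike_bq):
        # Overall chemical yield times detection efficiency, recovered from
        # the automated spike addition (counts per second per Bq added).
        return (cps_spiked - cps_sample) / spike_bq

    def tc99_bq_per_ml(cps_sample, efficiency, sample_ml=0.495):
        # Matrix-matched concentration of 99Tc in the original sample.
        return cps_sample / (efficiency * sample_ml)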
A real-time automated quality control of rain gauge data based on multiple sensors
NASA Astrophysics Data System (ADS)
Qi, Y.; Zhang, J.
2013-12-01
Precipitation is one of the most important meteorological and hydrological variables. Automated rain gauge networks provide direct measurements of precipitation and have been used for numerous applications such as generating regional and national precipitation maps, calibrating remote sensing data, and validating hydrological and meteorological model predictions. Automated gauge observations are prone to a variety of error sources (instrument malfunction, transmission errors, format changes) and require careful quality control (QC). Many previous gauge QC techniques were based on neighborhood checks within the gauge network itself, and their effectiveness depends on gauge densities and precipitation regimes. The current study takes advantage of the multi-sensor data sources in the National Mosaic and Multi-Sensor QPE (NMQ/Q2) system and develops an automated gauge QC scheme based on the consistency of radar hourly QPEs and gauge observations. Error characteristics of radar and gauge as a function of the radar sampling geometry, precipitation regimes, and the freezing-level height are considered. The new scheme was evaluated by comparing an NMQ national gauge-based precipitation product with independent manual gauge observations. Twelve heavy rainfall events from different seasons and areas of the United States were selected for the evaluation. The results show that the new NMQ product with QC'ed gauges has a more physically realistic spatial distribution than the old product, and the new product agrees much better statistically with the independent gauges.
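A toy illustration of a radar-gauge consistency check of the kind described (all thresholds are invented placeholders, not the NMQ/Q2 values):

    def qc_gauge(gauge_mm, radar_mm, beam_km, freezing_km,
                 abs_tol=2.0, ratio_tol=3.0):
        # Flag an hourly gauge value that disagrees with the collocated radar
        # QPE; limits relax when the beam samples above the freezing level,
        # where radar estimates are less reliable (bright band, ice).
        if gauge_mm < 0.0:
            return "reject"          # transmission/format error
        if beam_km > freezing_km:
            abs_tol, ratio_tol = 2.0 * abs_tol, 2.0 * ratio_tol
        lo, hi = sorted((gauge_mm, radar_mm))
        consistent = (hi - lo) <= abs_tol or hi <= ratio_tol * max(lo, 0.1)
        return "pass" if consistent else "suspect"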
Semi-automated Image Processing for Preclinical Bioluminescent Imaging.
Slavine, Nikolai V; McColl, Roderick W
Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy behind automated methods for bioluminescence image processing, from data acquisition to obtaining 3D images. To optimize this procedure, a semi-automated image processing approach within a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium; after determining an initial-order approximation for the photon fluence, we applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time required for volumetric imaging and quantitative assessment. The data obtained from light phantom and lung mouse tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach for the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
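For reference, the MLEM update used for the phantom calibration and surface reconstruction has a standard multiplicative form; a generic sketch (system matrix and sizes hypothetical, not the authors' implementation):

    import numpy as np

    def mlem(A, measured, n_iter=50):
        # A: (n_detector_pixels, n_voxels) system matrix mapping internal
        # source strength to light flux on the object surface; `measured` is
        # the CCD-derived flux. The multiplicative update keeps x >= 0.
        x = np.ones(A.shape[1])
        sens = np.maximum(A.sum(axis=0), 1e-12)
        for _ in range(n_iter):
            ratio = measured / np.maximum(A @ x, 1e-12)
            x *= (A.T @ ratio) / sens
        return x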
A new automated colorimetric method for measuring total oxidant status.
Erel, Ozcan
2005-12-01
To develop a new, colorimetric and automated method for measuring total oxidant status (TOS). The assay is based on the oxidation of ferrous ion to ferric ion in the presence of various oxidant species in acidic medium and the measurement of the ferric ion by xylenol orange. The oxidation reaction of the assay was enhanced and precipitation of proteins was prevented. In addition, autoxidation of ferrous ion present in the reagent was prevented during storage. The method was applied to an automated analyzer, which was calibrated with hydrogen peroxide, and the analytical performance characteristics of the assay were determined. There were strong correlations with hydrogen peroxide, tert-butyl hydroperoxide and cumene hydroperoxide solutions (r=0.99, P<0.001 for all). In addition, the new assay presented a typical sigmoidal reaction pattern in copper-induced lipoprotein autoxidation. The novel assay is linear up to 200 micromol H2O2 Equiv./L and its precision value is lower than 3%. The lower detection limit is 1.13 micromol H2O2 Equiv./L. The reagents are stable for at least 6 months on the automated analyzer. Serum TOS level was significantly higher in patients with osteoarthritis (21.23+/-3.11 micromol H2O2 Equiv./L) than in healthy subjects (14.19+/-3.16 micromol H2O2 Equiv./L, P<0.001), and the results showed a significant negative correlation with total antioxidant capacity (TAC) (r=-0.66, P<0.01). This easy, stable, reliable, sensitive, inexpensive and fully automated method can be used to measure total oxidant status.
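Since the analyzer is calibrated with hydrogen peroxide, converting a sample reading to H2O2 equivalents is a plain linear-calibration step; a minimal sketch (assuming absorbance readings; not the vendor implementation):

    import numpy as np

    def fit_h2o2_calibration(conc_umol_per_l, absorbance):
        # Linear fit of analyzer response against H2O2 standards (the assay
        # is linear up to ~200 micromol H2O2 Equiv./L per the abstract).
        slope, intercept = np.polyfit(conc_umol_per_l, absorbance, 1)
        return slope, intercept

    def tos_h2o2_equiv(abs_sample, slope, intercept):
        return (abs_sample - intercept) / slope  # micromol H2O2 Equiv./L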
Automated solid-phase extraction workstations combined with quantitative bioanalytical LC/MS.
Huang, N H; Kagel, J R; Rossi, D T
1999-03-01
An automated solid-phase extraction workstation was used to develop, characterize and validate an LC/MS/MS method for quantifying a novel lipid-regulating drug in dog plasma. Method development was facilitated by workstation functions that allowed wash solvents of varying organic composition to be mixed and tested automatically. Precision estimates for this approach were within 9.8% relative standard deviation (RSD) across the calibration range. Accuracy for replicate determinations of quality controls was between -7.2 and +6.2% relative error (RE) over 5-1000 ng/mL. Recoveries were evaluated for a wide variety of wash solvents, elution solvents and sorbents. Optimized recoveries were generally > 95%. A sample throughput benchmark for the method was approximately 8 min per sample. Because of parallel sample processing, 100 samples were extracted in less than 120 min. The approach has proven useful with LC/MS/MS, using a multiple reaction monitoring (MRM) approach.
Automated feature detection and identification in digital point-ordered signals
Oppenlander, Jane E.; Loomis, Kent C.; Brudnoy, David M.; Levy, Arthur J.
1998-01-01
A computer-based automated method to detect and identify features in digital point-ordered signals. The method is used for processing of non-destructive test signals, such as eddy current signals obtained from calibration standards. The signals are first automatically processed to remove noise and to determine a baseline. Next, features are detected in the signals using mathematical morphology filters. Finally, the features are verified using an expert system of pattern recognition methods and geometric criteria. The method has the advantage that standard features can be located without prior knowledge of the number or sequence of the features. Further advantages are that standard features can be differentiated from irrelevant signal features such as noise, and detected features are automatically verified by parameters extracted from the signals. The method proceeds fully automatically without initial operator set-up and without subjective operator judgement of features.
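A rough Python analogue of the detection stage (median-filter baseline plus a grey opening as the morphology filter; filter sizes and the threshold rule are illustrative assumptions, not the patented method):

    import numpy as np
    from scipy import ndimage

    def detect_features(signal, baseline_size=101, opening_size=9, k=5.0):
        # Baseline from a long median filter; a grey opening (mathematical
        # morphology) suppresses narrow noise spikes; features are labelled
        # runs above a robust (MAD-based) threshold.
        signal = np.asarray(signal, dtype=float)
        detrended = signal - ndimage.median_filter(signal, size=baseline_size)
        opened = ndimage.grey_opening(detrended, size=opening_size)
        mad = np.median(np.abs(detrended - np.median(detrended)))
        labels, n = ndimage.label(opened > k * mad)
        centers = ndimage.center_of_mass(opened.clip(min=0), labels,
                                         range(1, n + 1))
        return [c[0] for c in centers]  # feature positions (sample index)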
Real-time control of the robotic lunar observatory telescope
Anderson, J.M.; Becker, K.J.; Kieffer, H.H.; Dodd, D.N.
1999-01-01
The US Geological Survey operates an automated observatory dedicated to the radiometry of the Moon with the objective of developing a multispectral, spatially resolved photometric model of the Moon to be used in the calibration of Earth-orbiting spacecraft. Interference filters are used with two imaging instruments to observe the Moon in 32 passbands from 350-2500 nm. Three computers control the telescope mount and instruments with a fourth computer acting as a master system to control all observation activities. Real-time control software has been written to operate the instrumentation and to automate the observing process. The observing software algorithms use information including the positions of objects in the sky, the phase of the Moon, and the times of evening and morning twilight to decide how to observe program objects. The observatory has been operating in a routine mode since late 1995 and is expected to continue through at least 2002 without significant modifications.
NASA Astrophysics Data System (ADS)
Hoegger, B.; Levrat, G.; Staehelin, J.; Schill, H.; Ribordy, P.
1992-05-01
Recent improvements of the instrumentation at the LKO (Light Climatic Observatory - Ozone measuring station of the Swiss Meteorological Institute) are described. These improvements of the station at Arosa (Switzerland) include the construction of a 'spectrodome' (cabin for convenient operation of two Dobson spectrophotometers), partial automation of the two Dobson spectrophotometers D15 and D101 operated side by side (automatic data transmission to a PC), the complete automation of instrument D51 to perform Umkehr measurements, and the purchase of two Brewer spectrophotometers (Br40 and Br72). On the basis of digital data acquisition, all calculations to obtain the final results of the total amount of ozone are performed on a PC. A data quality concept under current development is described. Its aim is to compare the consistency of the different quasi-simultaneous measurements and to identify possible drifts in the calibration of the instruments at an early stage.
Lower extremity EMG-driven modeling of walking with automated adjustment of musculoskeletal geometry
Meyer, Andrew J.; Patten, Carolynn
2017-01-01
Neuromusculoskeletal disorders affecting walking ability are often difficult to manage, in part due to limited understanding of how a patient’s lower extremity muscle excitations contribute to the patient’s lower extremity joint moments. To assist in the study of these disorders, researchers have developed electromyography (EMG) driven neuromusculoskeletal models utilizing scaled generic musculoskeletal geometry. While these models can predict individual muscle contributions to lower extremity joint moments during walking, the accuracy of the predictions can be hindered by errors in the scaled geometry. This study presents a novel EMG-driven modeling method that automatically adjusts surrogate representations of the patient’s musculoskeletal geometry to improve prediction of lower extremity joint moments during walking. In addition to commonly adjusted neuromusculoskeletal model parameters, the proposed method adjusts model parameters defining muscle-tendon lengths, velocities, and moment arms. We evaluated our EMG-driven modeling method using data collected from a high-functioning hemiparetic subject walking on an instrumented treadmill at speeds ranging from 0.4 to 0.8 m/s. EMG-driven model parameter values were calibrated to match inverse dynamic moments for five degrees of freedom in each leg while keeping musculoskeletal geometry close to that of an initial scaled musculoskeletal model. We found that our EMG-driven modeling method incorporating automated adjustment of musculoskeletal geometry predicted net joint moments during walking more accurately than did the same method without geometric adjustments. Geometric adjustments improved moment prediction errors by 25% on average and up to 52%, with the largest improvements occurring at the hip. Predicted adjustments to musculoskeletal geometry were comparable to errors reported in the literature between scaled generic geometric models and measurements made from imaging data. Our results demonstrate that with appropriate experimental data, joint moment predictions for walking generated by an EMG-driven model can be improved significantly when automated adjustment of musculoskeletal geometry is included in the model calibration process. PMID:28700708
Automated Smartphone Threshold Audiometry: Validity and Time Efficiency.
van Tonder, Jessica; Swanepoel, De Wet; Mahomed-Asmail, Faheema; Myburgh, Hermanus; Eikelboom, Robert H
2017-03-01
Smartphone-based threshold audiometry with automated testing has the potential to provide affordable access to audiometry in underserved contexts. To validate the threshold version (hearTest) of the validated hearScreen™ smartphone-based application using inexpensive smartphones (Android operating system) and calibrated supra-aural headphones. A repeated measures within-participant study design was employed to compare air-conduction thresholds (0.5-8 kHz) obtained through automated smartphone audiometry to thresholds obtained through conventional audiometry. A total of 95 participants were included in the study. Of these, 30 were adults, who had known bilateral hearing losses of varying degrees (mean age = 59 yr, standard deviation [SD] = 21.8; 56.7% female), and 65 were adolescents (mean age = 16.5 yr, SD = 1.2; 70.8% female), of which 61 had normal hearing and the remaining 4 had mild hearing losses. Threshold comparisons were made between the two test procedures. The Wilcoxon signed-rank test was used to compare threshold correspondence between manual and smartphone thresholds, and the paired-samples t-test was used to compare test times. Within the adult sample, 94.4% of thresholds obtained through smartphone and conventional audiometry corresponded within 10 dB or less. There was no significant difference between smartphone (6.75-min average, SD = 1.5) and conventional audiometry test duration (6.65-min average, SD = 2.5). Within the adolescent sample, 84.7% of thresholds obtained at 0.5, 2, and 4 kHz with hearTest and conventional audiometry corresponded within ≤5 dB. At 1 kHz, 79.3% of the thresholds differed by ≤10 dB. There was a significant difference (p < 0.01) between smartphone (7.09 min, SD = 1.2) and conventional audiometry test duration (3.23 min, SD = 0.6). The hearTest application with calibrated supra-aural headphones provides a cost-effective option to determine valid air-conduction hearing thresholds. American Academy of Audiology
Eide, Ingvar; Westad, Frank
2018-01-01
A pilot study demonstrating real-time environmental monitoring with automated multivariate analysis of multi-sensor data submitted online has been performed at the cabled LoVe Ocean Observatory, located at 258 m depth, 20 km off the coast of Lofoten-Vesterålen, Norway. The major purpose was efficient monitoring of many variables simultaneously and early detection of changes and time-trends in the overall response pattern before changes were evident in individual variables. The pilot study was performed with 12 sensors from May 16 to August 31, 2015. The sensors provided data for chlorophyll, turbidity, conductivity, temperature (three sensors), salinity (calculated from temperature and conductivity), biomass at three different depth intervals (5-50, 50-120, 120-250 m), and current speed measured in two directions (east and north) using two sensors covering different depths with overlap. A total of 88 variables were monitored, 78 from the two current speed sensors. The time resolution varied among sensors, so the data had to be aligned to a common time resolution. After alignment, the data were interpreted using principal component analysis (PCA). Initially, a calibration model was established using data from May 16 to July 31. The data on current speed from two sensors were subject to two separate PCA models, and the score vectors from these two models were combined with the other 10 variables in a multi-block PCA model. The observations from August were projected on the calibration model consecutively one at a time and the result was visualized in a score plot. Automated PCA of multi-sensor data submitted online is illustrated with an attached time-lapse video covering the relatively short time period used in the pilot study. Methods for statistical validation, and warning and alarm limits are described. Redundant sensors enable sensor diagnostics and quality assurance. In a future perspective, the concept may be used in integrated environmental monitoring.
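The calibrate-then-project workflow maps naturally onto a PCA monitoring sketch; here synthetic data and scikit-learn stand in for the observatory's multi-block implementation, and the Q-statistic limit is an invented placeholder:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Calibration phase (16 May - 31 July): one row per time-aligned
    # multi-sensor observation. Synthetic data stands in for the real matrix.
    X_cal = np.random.default_rng(0).normal(size=(500, 12))
    scaler = StandardScaler().fit(X_cal)
    pca = PCA(n_components=3).fit(scaler.transform(X_cal))

    def project(x_new, q_limit=10.0):
        # Monitoring phase: project one new observation onto the calibration
        # model; the Q statistic (squared residual) flags changes in the
        # overall pattern before any single variable is clearly anomalous.
        z = scaler.transform(np.atleast_2d(x_new))
        scores = pca.transform(z)
        resid = z - pca.inverse_transform(scores)
        q = float((resid ** 2).sum())
        return scores.ravel(), q, q > q_limit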
Easy Leaf Area: Automated digital image analysis for rapid and accurate measurement of leaf area.
Easlon, Hsien Ming; Bloom, Arnold J
2014-07-01
Measurement of leaf areas from digital photographs has traditionally required significant user input unless backgrounds are carefully masked. Easy Leaf Area was developed to batch process hundreds of Arabidopsis rosette images in minutes, removing background artifacts and saving results to a spreadsheet-ready CSV file. • Easy Leaf Area uses the color ratios of each pixel to distinguish leaves and calibration areas from their background and compares leaf pixel counts to a red calibration area to eliminate the need for camera distance calculations or manual ruler scale measurement that other software methods typically require. Leaf areas estimated by this software from images taken with a camera phone were more accurate than ImageJ estimates from flatbed scanner images. • Easy Leaf Area provides an easy-to-use method for rapid measurement of leaf area and nondestructive estimation of canopy area from digital images.
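The color-ratio idea can be sketched in a few lines (the ratio thresholds and calibration-square size below are illustrative assumptions, not the published defaults):

    import numpy as np

    def leaf_area_cm2(rgb, red_square_cm2=4.0):
        # rgb: (H, W, 3) uint8 image holding leaves plus a red calibration
        # square of known size. Per-pixel color ratios separate leaf and
        # calibration pixels from the background; area follows from the
        # pixel-count ratio, so no camera distance or ruler is needed.
        r, g, b = (rgb[..., i].astype(float) + 1e-6 for i in range(3))
        leaf = (g / r > 1.1) & (g / b > 1.1)
        scale = (r / g > 1.5) & (r / b > 1.5)
        if scale.sum() == 0:
            raise ValueError("red calibration area not found")
        return leaf.sum() / scale.sum() * red_square_cm2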
Cloud cover determination in polar regions from satellite imagery
NASA Technical Reports Server (NTRS)
Barry, R. G.; Maslanik, J. A.; Key, J. R.
1987-01-01
The spectral and spatial characteristics of clouds and surface conditions in the polar regions are defined, and calibrated, geometrically correct data sets suitable for quantitative analysis are created. Ways are explored in which this information can be applied to cloud classifications as new methods or as extensions to existing classification schemes. A methodology is developed that uses automated techniques to merge Advanced Very High Resolution Radiometer (AVHRR) and Scanning Multichannel Microwave Radiometer (SMMR) data, and to apply first-order calibration and zenith angle corrections to the AVHRR imagery. Cloud cover and surface types are manually interpreted, and manual methods are used to define relatively pure training areas to describe the textural and multispectral characteristics of clouds over several surface conditions. The effects of viewing angle and bidirectional reflectance differences are studied for several classes, and the effectiveness of some key components of existing classification schemes is tested.
NASA Astrophysics Data System (ADS)
Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann
2016-05-01
The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.
NASA Technical Reports Server (NTRS)
Ketchum, E.
1988-01-01
The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) will be responsible for performing ground attitude determination for Gamma Ray Observatory (GRO) support. The study reported in this paper provides the FDD and the GRO project with ground attitude determination error information and illustrates several uses of the Generalized Calibration System (GCS). GCS, an institutional software tool in the FDD, automates the computation of the expected attitude determination uncertainty that a spacecraft will encounter during its mission. The GRO project is particularly interested in the uncertainty in the attitude determination using Sun sensors and a magnetometer when both star trackers are inoperable. In order to examine the expected attitude errors for GRO, a systematic approach was developed including various parametric studies. The approach identifies pertinent parameters and combines them to form a matrix of test runs in GCS. This matrix formed the basis for this study.
Yasui, Yutaka; McLerran, Dale; Adam, Bao-Ling; Winget, Marcy; Thornquist, Mark; Feng, Ziding
2003-01-01
Discovery of "signature" protein profiles that distinguish disease states (eg, malignant, benign, and normal) is a key step towards translating recent advancements in proteomic technologies into clinical utilities. Protein data generated from mass spectrometers are, however, large in size and have complex features due to complexities in both biological specimens and interfering biochemical/physical processes of the measurement procedure. Making sense out of such high-dimensional complex data is challenging and necessitates the use of a systematic data analytic strategy. We propose here a data processing strategy for two major issues in the analysis of such mass-spectrometry-generated proteomic data: (1) separation of protein "signals" from background "noise" in protein intensity measurements and (2) calibration of protein mass/charge measurements across samples. We illustrate the two issues and the utility of the proposed strategy using data from a prostate cancer biomarker discovery project as an example.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lombigit, L., E-mail: lojius@nm.gov.my; Yussup, N., E-mail: nolida@nm.gov.my; Ibrahim, Maslina Mohd
A digital n/γ pulse shape discrimination (PSD) system is currently under development at the Instrumentation and Automation Centre, Malaysian Nuclear Agency. This system aims at simultaneous detection of fast neutrons and gamma rays in a mixed-radiation environment. This work reports the system characterization performed on the liquid scintillation detector (BC-501A) and the digital pulse shape discrimination (DPSD) system. The characterization involves measurement of the electron light output from the BC-501A detector and energy calibration of the pulse height spectra acquired with the DPSD system using a set of photon reference sources. The main goal of this experiment is to calibrate the ADC channels of our DPSD system, characterize the BC-501A detector and find the position of the Compton edge, which can later be used as a threshold for the n/γ PSD experiment. The detector resolution, however, is worse than other published data, which is expected as our detector has a smaller active volume.
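The Compton-edge threshold mentioned above follows from Compton kinematics: for a photon of energy E, the maximum energy transferred to an electron is 2E^2/(m_e c^2 + 2E). A one-function check:

    def compton_edge_kev(e_gamma_kev, mec2_kev=511.0):
        # Maximum energy transferred to an electron in Compton scattering;
        # the resulting spectrum edge is a handy calibration/threshold point.
        return 2.0 * e_gamma_kev ** 2 / (mec2_kev + 2.0 * e_gamma_kev)

    print(round(compton_edge_kev(661.7), 1))  # 137Cs: ~477.4 keV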
NASA Astrophysics Data System (ADS)
Jenness, T.; Robson, E. I.; Stevens, J. A.
2010-01-01
Calibrated data for 143 flat-spectrum extragalactic radio sources are presented at a wavelength of 850μm covering a 5-yr period from 2000 April. The data, obtained at the James Clerk Maxwell Telescope using the Submillimetre Common-User Bolometer Array (SCUBA) camera in pointing mode, were analysed using an automated pipeline process based on the Observatory Reduction and Acquisition Control - Data Reduction (ORAC-DR) system. This paper describes the techniques used to analyse and calibrate the data, and presents the data base of results along with a representative sample of the better-sampled light curves. A re-analysis of previously published data from 1997 to 2000 is also presented. The combined catalogue, comprising 10493 flux density measurements, provides a unique and valuable resource for studies of extragalactic radio sources.
The Architecture Design of Detection and Calibration System for High-voltage Electrical Equipment
NASA Astrophysics Data System (ADS)
Ma, Y.; Lin, Y.; Yang, Y.; Gu, Ch; Yang, F.; Zou, L. D.
2018-01-01
With the construction of the Material Quality Inspection Center of the Shandong electric power company, the Electric Power Research Institute has taken on more work in quality analysis and laboratory calibration for high-voltage electrical equipment, making informationization construction urgent. In this paper we design a consolidated system that implements electronic management and online process automation for material sampling, test apparatus detection and field testing. Across these three tasks we use QR code scanning, online Word editing and electronic signatures. These techniques simplify the complex processes of warehouse management and test report transfer, and largely reduce manual procedures. The construction of the standardized detection information platform realizes integrated management of high-voltage electrical equipment from networking and running through to periodic detection. According to a system operation evaluation, report transfer is twice as fast, and data queries are also easier and faster.
Kolocouri, Filomila; Dotsikas, Yannis; Apostolou, Constantinos; Kousoulos, Constantinos; Soumelas, Georgios-Stefanos; Loukas, Yannis L
2011-01-01
An HPLC/MS/MS method characterized by complete automation and high throughput was developed for the determination of cilazapril and its active metabolite cilazaprilat in human plasma. All sample preparation and analysis steps were performed using 2.2 mL 96-well deep-well plates, while robotic liquid handling workstations were utilized for all liquid transfer steps, including liquid-liquid extraction. The whole procedure was very fast compared to a manual procedure with vials and no automation. The method also had a very short chromatographic run time of 1.5 min. Sample analysis was performed by RP-HPLC/MS/MS with positive electrospray ionization using multiple reaction monitoring. The calibration curve was linear in the range of 0.500-300 and 0.250-150 ng/mL for cilazapril and cilazaprilat, respectively. The proposed method was fully validated and proved to be selective, accurate, precise, reproducible, and suitable for the determination of cilazapril and cilazaprilat in human plasma. Therefore, it was applied to a bioequivalence study after per os administration of 2.5 mg tablet formulations of cilazapril.
NASA Astrophysics Data System (ADS)
Michalik-Onichimowska, Aleksandra; Kern, Simon; Riedel, Jens; Panne, Ulrich; King, Rudibert; Maiwald, Michael
2017-04-01
Driven mostly by the search for chemical syntheses under biocompatible conditions, so-called "click" chemistry has rapidly become a growing field of research. The resulting simple one-pot reactions have so far only scarcely been accompanied by adequate optimization via comparably straightforward and robust analysis techniques with short set-up times. Here, we report on a fast and reliable calibration-free online NMR monitoring approach for technical mixtures. It combines a versatile fluidic system, continuous-flow measurement of 1H spectra with a time interval of 20 s per spectrum, and a robust, fully automated algorithm to interpret the obtained data. As a proof of concept, the thiol-ene coupling between N-boc cysteine methyl ester and allyl alcohol was conducted in a variety of non-deuterated solvents while its time-resolved behaviour was characterized with step tracer experiments. Overlapping signals in online spectra during thiol-ene coupling could be deconvoluted with a spectral model using indirect hard modeling and were subsequently converted to either molar ratios (using a calibration-free approach) or absolute concentrations (using 1-point calibration). For various solvents the kinetic constant k for the pseudo-first-order reaction was estimated to be 3.9 h-1 at 25 °C. The obtained results were compared with direct integration of non-overlapping signals and showed good agreement with the implemented mass balance.
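Estimating the pseudo-first-order constant from the online molar-ratio traces reduces to a log-linear fit; a toy sketch with synthetic data (not the authors' pipeline):

    import numpy as np

    def pseudo_first_order_k(t_hours, conc):
        # For C(t) = C0 exp(-k t), ln C is linear in t with slope -k; conc
        # can be the molar ratios extracted from the online spectra.
        slope, _ = np.polyfit(t_hours, np.log(conc), 1)
        return -slope

    t = np.linspace(0.0, 1.0, 30)
    print(pseudo_first_order_k(t, np.exp(-3.9 * t)))  # recovers ~3.9 h^-1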
Evaluation and Comparison of Methods for Measuring Ozone ...
Ambient evaluations of the various ozone and NO2 methods were conducted during field intensive studies as part of the NASA DISCOVER-AQ project conducted during July 2011 near Baltimore, MD; January-February 2013 in the San Joaquin Valley, CA; September 2013 in Houston, TX; and July-August 2014 near Denver, CO. During field intensive studies, instruments were calibrated according to manufacturers' operation manuals and in accordance with FRM requirements listed in 40 CFR 50. During the ambient evaluation campaigns, nightly automated zero and span checks were performed to monitor the validity of the calibration and to control for drifts or variations in the span and/or zero response. Both the calibration gas concentrations and the nightly zero and span gas concentrations were delivered using a dynamic dilution calibration system (T700U/T701H, Teledyne API). The analyzers were housed within a temperature-controlled shelter during the sampling campaigns. A glass inlet, with sampling height located approximately 5 m above ground level, and a subsequent sampling manifold were shared by all instruments. Data generated by all analyzers were collected and logged using a field-deployable data acquisition system (Envidas Ultimate). A summary of instruments used during DISCOVER-AQ deployment is listed in Table 1. Figure 1 shows a typical DISCOVER-AQ site (Houston 2013) where EPA (and others) instrumentation was deployed. Under the Clean Air Act, the U.S. EPA has estab
The influence of the in situ camera calibration for direct georeferencing of aerial imagery
NASA Astrophysics Data System (ADS)
Mitishita, E.; Barrios, R.; Centeno, J.
2014-11-01
The direct determination of exterior orientation parameters (EOPs) of aerial images via GNSS/INS technologies is an essential prerequisite in photogrammetric mapping nowadays. Although direct sensor orientation technologies provide a high degree of automation due to GNSS/INS, the accuracy of the obtained results depends on the quality of a group of parameters that must accurately model the conditions of the system at the moment the job is performed. One sub-group of parameters (lever arm offsets and boresight misalignments) models the position and orientation of the sensors with respect to the IMU body frame, since it is impossible to place all sensors at the same position and orientation on the airborne platform. Another sub-group of parameters models the internal characteristics of the sensor (IOP). A system calibration procedure has been recommended by worldwide studies to obtain accurate parameters (mounting and sensor characteristics) for applications of direct sensor orientation. Commonly, mounting and sensor characteristics are not stable; they can vary under different flight conditions. System calibration requires a geometric arrangement of the flight and/or control points to decouple correlated parameters, which is not available in a conventional photogrammetric flight. Considering this difficulty, this study investigates the feasibility of in situ camera calibration to improve the accuracy of the direct georeferencing of aerial images. The camera calibration uses a minimum image block, extracted from the conventional photogrammetric flight, and a control point arrangement. A digital Vexcel UltraCam XP camera connected to a POS AV™ system was used to acquire two photogrammetric image blocks. The blocks have different flight directions and opposite flight lines. In situ calibration procedures to compute different sets of IOPs were performed, and their results are analyzed and used in photogrammetric experiments. The IOPs from the in situ camera calibration significantly improve the accuracy of the direct georeferencing. Results from the experiments are shown and discussed.
NASA Astrophysics Data System (ADS)
Luo, L.
2011-12-01
Automated calibration of complex deterministic water quality models with a large number of biogeochemical parameters can reduce time-consuming iterative simulations involving empirical judgements of model fit. We undertook auto-calibration of the one-dimensional hydrodynamic-ecological lake model DYRESM-CAEDYM, using a Monte Carlo sampling (MCS) method, in order to test the applicability of this procedure for shallow, polymictic Lake Rotorua (New Zealand). The calibration procedure involved independently minimising the root-mean-square error (RMSE) and maximizing the Pearson correlation coefficient (r) and Nash-Sutcliffe efficiency coefficient (Nr) for comparisons of model state variables against measured data. An assigned number of parameter permutations was used for 10,000 simulation iterations. The 'optimal' temperature calibration produced an RMSE of 0.54 °C, an Nr-value of 0.99 and an r-value of 0.98 through the whole water column, based on comparisons with 540 observed water temperatures collected between 13 July 2007 and 13 January 2009. The modeled bottom dissolved oxygen concentration (20.5 m below surface) was compared with 467 available observations. The calculated RMSE of the simulations compared with the measurements was 1.78 mg L-1, the Nr-value was 0.75 and the r-value was 0.87. The auto-calibrated model was further tested against an independent data set by simulating bottom-water hypoxia events for the period 15 January 2009 to 8 June 2011 (875 days). This verification produced an accurate simulation of five hypoxic events corresponding to DO < 2 mg L-1 during the summers of 2009-2011. The RMSE was 2.07 mg L-1, the Nr-value 0.62 and the r-value 0.81, based on the available data set of 738 days. The auto-calibration software for DYRESM-CAEDYM developed here is substantially less time-consuming and more efficient in parameter optimisation than traditional manual calibration, which has been the standard practice for similar complex water quality models.
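The MCS auto-calibration loop can be sketched generically; here run_model is a hypothetical wrapper around the DYRESM-CAEDYM call, and bounds and iteration count are placeholders:

    import numpy as np

    def rmse(sim, obs):
        return float(np.sqrt(np.mean((np.asarray(sim) - obs) ** 2)))

    def nash_sutcliffe(sim, obs):
        obs = np.asarray(obs)
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def monte_carlo_calibrate(run_model, bounds, obs, n_iter=10_000, seed=1):
        # Draw parameter sets uniformly within their ranges, run the model,
        # and keep the best fit by RMSE (r and Nr can be tracked alongside).
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        best_err, best_p = np.inf, None
        for _ in range(n_iter):
            p = rng.uniform(lo, hi)
            err = rmse(run_model(p), obs)
            if err < best_err:
                best_err, best_p = err, p
        return best_p, best_err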
SWIR Calibration of Spectralon Reflectance Factor
NASA Technical Reports Server (NTRS)
Georgiev, Georgi T.; Butler, James J.; Cooksey, Catherine; Ding, Leibo; Thome, Kurtis J.
2011-01-01
Satellite instruments operating in the reflective solar wavelength region require accurate and precise determination of the Bidirectional Reflectance Factor (BRF) of laboratory-based diffusers used in their pre-flight and on-orbit radiometric calibrations. BRF measurements are required throughout the reflected-solar spectrum from the ultraviolet through the shortwave infrared. Spectralon diffusers are commonly used as a reflectance standard for bidirectional and hemispherical geometries. The Diffuser Calibration Laboratory (DCaL) at NASA's Goddard Space Flight Center is a secondary calibration facility with reflectance measurements traceable to those made by the Spectral Tri-function Automated Reference Reflectometer (STARR) facility at the National Institute of Standards and Technology (NIST). For more than two decades, the DCaL has provided numerous NASA projects with BRF data in the ultraviolet (UV), visible (VIS) and the Near InfraRed (NIR) spectral regions. Presented in this paper are measurements of BRF from 1475 nm to 1625 nm obtained using an indium gallium arsenide detector and a tunable coherent light source. The sample was a 50.8 mm (2 in) diameter, 99% white Spectralon target. The BRF results are discussed and compared to empirically generated data from a model based on NIST certified values of 6° directional-hemispherical spectral reflectance factors from 900 nm to 2500 nm. Employing a new NIST capability for measuring bidirectional reflectance using a cooled, extended InGaAs detector, BRF calibration measurements of the same sample were also made using NIST's STARR from 1475 nm to 1625 nm at an incident angle of 0° and at viewing angles of 40°, 45°, and 50°. The total combined uncertainty for BRF in this ShortWave Infrared (SWIR) range is less than 1%. This measurement capability will evolve into a BRF calibration service in the SWIR region in support of NASA remote sensing missions. Keywords: BRF, BRDF, Calibration, Spectralon, Reflectance, Remote Sensing.
Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle †
Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru
2018-01-01
We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy. PMID:29320434
Automatic Camera Calibration for Cultural Heritage Applications Using Unstructured Planar Objects
NASA Astrophysics Data System (ADS)
Adam, K.; Kalisperakis, I.; Grammatikopoulos, L.; Karras, G.; Petsa, E.
2013-07-01
As a rule, image-based documentation of cultural heritage relies today on ordinary digital cameras and commercial software. As such projects often involve researchers not familiar with photogrammetry, the question of camera calibration is important. Freely available, open-source, user-friendly software for automatic camera calibration, often based on simple 2D chess-board patterns, is an answer to the demand for simplicity and automation. However, such tools cannot respond to all requirements met in cultural heritage conservation regarding possible imaging distances and focal lengths. Here we investigate the practical possibility of camera calibration from unknown planar objects, i.e. any planar surface with adequate texture; we have focused on the example of urban walls covered with graffiti. Images are connected pair-wise with inter-image homographies, which are estimated automatically through a RANSAC-based approach after extracting and matching interest points with the SIFT operator. All valid points are identified on all images on which they appear. Provided that the image set includes a "fronto-parallel" view, inter-image homographies with this image are regarded as emulations of image-to-world homographies and allow computing initial estimates for the interior and exterior orientation elements. Following this initialization step, the estimates are introduced into a final self-calibrating bundle adjustment. Measures are taken to discard unsuitable images and verify object planarity. Results from practical experimentation indicate that this method may produce satisfactory results. The authors intend to incorporate the described approach into their freely available user-friendly software tool, which relies on chess-boards, to assist non-experts in their projects with image-based approaches.
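The pair-wise connection step corresponds closely to standard OpenCV building blocks; a sketch in which the ratio-test and RANSAC thresholds are conventional defaults, not the authors' settings:

    import cv2
    import numpy as np

    def pairwise_homography(img1, img2, ratio=0.75):
        # SIFT interest points, Lowe ratio test, then a RANSAC-estimated
        # inter-image homography: the pair-wise connection step.
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        pairs = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        good = [m for m, n in (p for p in pairs if len(p) == 2)
                if m.distance < ratio * n.distance]
        src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return H, inliers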
NASA Astrophysics Data System (ADS)
Norton, P. A., II; Haj, A. E., Jr.
2014-12-01
The United States Geological Survey is currently developing a National Hydrologic Model (NHM) to support and facilitate coordinated and consistent hydrologic modeling efforts at the scale of the continental United States. As part of this effort, the Geospatial Fabric (GF) for the NHM was created. The GF is a database that contains parameters derived from datasets that characterize the physical features of watersheds. The GF was used to aggregate catchments and flowlines defined in the National Hydrography Dataset Plus dataset for more than 100,000 hydrologic response units (HRUs), and to establish initial parameter values for input to the Precipitation-Runoff Modeling System (PRMS). Many parameter values are adjusted in PRMS using an automated calibration process. Using these adjusted parameter values, the PRMS model estimated variables such as evapotranspiration (ET), potential evapotranspiration (PET), snow-covered area (SCA), and snow water equivalent (SWE). In order to evaluate the effectiveness of parameter calibration, and model performance in general, several satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) gridded datasets including ET, PET, SCA, and SWE were compared to PRMS-simulated values. The MODIS and SNODAS data were spatially averaged for each HRU, and compared to PRMS-simulated ET, PET, SCA, and SWE values for each HRU in the Upper Missouri River watershed. Default initial GF parameter values and PRMS calibration ranges were evaluated. Evaluation results, and the use of MODIS and SNODAS datasets to update GF parameter values and PRMS calibration ranges, are presented and discussed.
Simple and accurate quantification of BTEX in ambient air by SPME and GC-MS.
Baimatova, Nassiba; Kenessov, Bulat; Koziel, Jacek A; Carlsen, Lars; Bektassov, Marat; Demyanenko, Olga P
2016-07-01
Benzene, toluene, ethylbenzene and xylenes (BTEX) comprise one of the most ubiquitous and hazardous groups of ambient air pollutants of concern. Application of standard analytical methods for quantification of BTEX is limited by the complexity of sampling and sample preparation equipment, and by budget requirements. Methods based on SPME represent a simpler alternative, but still require complex calibration procedures. The objective of this research was to develop a simpler, low-budget, and accurate method for quantification of BTEX in ambient air based on SPME and GC-MS. Standard 20-mL headspace vials were used for field air sampling and calibration. To avoid the challenges of obtaining and working with 'zero' air, slope factors of external standard calibration were determined using standard addition and inherently polluted lab air. For the polydimethylsiloxane (PDMS) fiber, differences between the slope factors of calibration plots obtained using lab and outdoor air were below 14%. The PDMS fiber provided higher precision during calibration, while the use of a Carboxen/PDMS fiber resulted in lower detection limits for benzene and toluene. To provide sufficient accuracy, the use of 20-mL vials requires triplicate sampling and analysis. The method was successfully applied to the analysis of 108 ambient air samples from Almaty, Kazakhstan. Average concentrations of benzene, toluene, ethylbenzene and o-xylene were 53, 57, 11 and 14 µg m⁻³, respectively. The developed method can be modified for quantification of a wider range of volatile organic compounds in air. In addition, the new method is amenable to automation. Copyright © 2016 Elsevier B.V. All rights reserved.
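The slope-factor approach replaces 'zero' air with inherently polluted air: responses are regressed against added analyte, the slope gives the sensitivity, and the x-intercept recovers the concentration already present. A minimal sketch with invented numbers (not the study's data):

```python
import numpy as np

# Hypothetical standard-addition data for one analyte in a 20-mL vial
added = np.array([0.0, 20.0, 40.0, 80.0])          # added conc., ug m-3
area = np.array([1520.0, 2490.0, 3465.0, 5410.0])  # GC-MS peak areas

slope, intercept = np.polyfit(added, area, 1)

# The slope is the calibration sensitivity; the x-intercept gives the
# native concentration of the (inherently polluted) lab air.
native = intercept / slope
print(f"sensitivity = {slope:.1f} area/(ug m-3), native = {native:.1f} ug m-3")
```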
Design and development of an ultrasound calibration phantom and system
NASA Astrophysics Data System (ADS)
Cheng, Alexis; Ackerman, Martin K.; Chirikjian, Gregory S.; Boctor, Emad M.
2014-03-01
Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and the absence of ionizing radiation, ultrasound is a common medical imaging modality in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid-body transformation between a tracked marker attached to the ultrasound transducer and the ultrasound image. A phantom or model with known geometry is also required. In this work, we design and test an ultrasound calibration phantom and software. The two main considerations in this work are utilizing our knowledge of ultrasound physics to design the phantom and delivering an easy-to-use calibration process to the user. We explore the use of a three-dimensional printer to create the phantom in its entirety without need for user assembly. We have also developed software to automatically segment the three-dimensional printed rods from the ultrasound image by leveraging knowledge about the shape and scale of the phantom. In this work, we present preliminary results from using this phantom to perform ultrasound calibration. To test the efficacy of our method, we match the projection of the points segmented from the image to the known model and calculate the sum of squared differences between corresponding points for several combinations of motion generation and filtering methods. The best performing combination of motion and filtering techniques had an error of 1.56 mm and a standard deviation of 1.02 mm.
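The reported error measure compares segmented points against the known model under a candidate calibration transform. A minimal sketch, assuming Nx3 arrays of corresponding points and a rigid transform (R, t) produced by the calibration (illustrative, not the authors' code):

```python
import numpy as np

def registration_error(segmented_pts, model_pts, R, t):
    """Sum of squared differences (and RMSE) between ultrasound-segmented
    rod points mapped by a rigid transform and the known phantom model."""
    mapped = segmented_pts @ R.T + t       # apply candidate calibration
    residuals = mapped - model_pts         # per-point 3D error vectors
    ssd = float(np.sum(residuals ** 2))
    rmse = float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))
    return ssd, rmse
```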
NASA Astrophysics Data System (ADS)
Brachmann, Johannes F. S.; Baumgartner, Andreas; Lenhard, Karim
2016-10-01
The Calibration Home Base (CHB) at the Remote Sensing Technology Institute of the German Aerospace Center (DLR-IMF) is an optical laboratory designed for the calibration of imaging spectrometers for the VNIR/SWIR wavelength range. Radiometric, spectral and geometric characterization is realized in the CHB in a precise and highly automated fashion. This allows a wide range of time-consuming measurements to be performed efficiently. The implementation of ISO 9001 standards ensures a traceable quality of results. DLR-IMF will support the calibration and characterization campaign of the future German spaceborne hyperspectral imager EnMAP. In the context of this activity, a procedure for the correction of imaging artifacts, such as those caused by stray light, is currently being developed by DLR-IMF. The goal is the correction of in-band stray light as well as ghost images down to a level of a few digital numbers across the whole wavelength range of 420-2450 nm. DLR-IMF owns a Norsk Elektro Optikks HySpex airborne imaging spectrometer system that has been thoroughly characterized. This system will be used to test stray light calibration procedures for EnMAP. Hyperspectral snapshot sensors offer the possibility to acquire hyperspectral data in two spatial dimensions simultaneously. Recently, these rather new spectrometers have attracted much interest in the remote sensing community. Different designs are currently used for local-area observation, for example from small unmanned aerial vehicles (sUAVs). In this context, the CHB's measurement capabilities are currently being extended so that a standard measurement procedure for these new sensors can be implemented.
Laboratory data on coarse-sediment transport for bedload-sampler calibrations
Hubbell, David Wellington; Stevens, H.H.; Skinner, J.V.; Beverage, J.P.
1987-01-01
A unique facility capable of recirculating and continuously measuring the transport rates of sediment particles ranging in size from about 1 to 75 millimeters in diameter was designed and used in an extensive program involving the calibration of bedload samplers. The facility consisted of a 9-foot-wide by 6-foot-deep by 272-foot-long rectangular channel that incorporated seven automated collection pans and a sediment-return system. The collection pans accumulated, weighed, and periodically dumped bedload falling through a slot in the channel floor. Variations of the Helley-Smith bedload sampler, an Arnhem sampler, and two VUV-type samplers were used to obtain transport rates for comparison with rates measured at the bedload slot (trap). Tests were conducted under 20 different hydraulic and sedimentologic conditions (runs) with 3 uniform-size bed materials and a bed-material mixture. Hydraulic and sedimentologic data collected concurrently with the calibration measurements are described and, in part, summarized in tabular and graphic form. Tables indicate the extent of the data, which are available on magnetic media. The information includes sediment-transport rates; particle-size distributions; water discharges, depths, and slopes; longitudinal profiles of streambed-surface elevations; and temporal records of streambed-surface elevations at fixed locations.
Scalable tuning of building models to hourly data
Garrett, Aaron; New, Joshua Ryan
2015-03-31
Energy models of existing buildings are unreliable unless calibrated so they correlate well with actual energy usage. Manual tuning requires a skilled professional, is prohibitively expensive for small projects, imperfect, non-repeatable, non-transferable, and not scalable to the dozens of sensor channels that smart meters, smart appliances, and cheap/ubiquitous sensors are beginning to make available today. A scalable, automated methodology is needed to quickly and intelligently calibrate building energy models to all available data, increase the usefulness of those models, and facilitate speed-and-scale penetration of simulation-based capabilities into the marketplace for actualized energy savings. The "Autotune" project is a novel, model-agnostic methodology which leverages supercomputing, large simulation ensembles, and big-data mining with multiple machine learning algorithms to allow automatic calibration of simulations that match measured experimental data in a way that is deployable on commodity hardware. This paper shares several methodologies employed to reduce the combinatorial complexity to a computationally tractable search problem for hundreds of input parameters. Furthermore, accuracy metrics are provided which quantify model error to measured data for either monthly or hourly electrical usage from a highly-instrumented, emulated-occupancy research home.
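Two accuracy metrics commonly used to judge a building model against measured monthly or hourly usage are CV(RMSE) and NMBE (as defined, for example, in ASHRAE Guideline 14). The sketch below is a generic formulation of those metrics, not code from the Autotune project:

```python
import numpy as np

def calibration_metrics(measured, simulated):
    """CV(RMSE) and NMBE, both in percent, for a calibrated model."""
    n = len(measured)
    mean_meas = np.mean(measured)
    cvrmse = 100.0 * np.sqrt(np.sum((measured - simulated) ** 2) / n) / mean_meas
    nmbe = 100.0 * np.sum(measured - simulated) / (n * mean_meas)
    return cvrmse, nmbe
```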
Long term measurement network for FIFE
NASA Technical Reports Server (NTRS)
Blad, Blaine L.; Walter-Shea, Elizabeth A.; Hays, Cynthia J.
1988-01-01
The objectives were: to obtain selected instruments which were not standard equipment on the Portable Automated Mesometeorological (PAM) and Data Control Platform (DCP) stations; to assist in incorporating these instruments onto the PAM and DCP stations; to help provide routine maintenance of the instruments; to conduct periodic instrument calibrations; and to repair or replace malfunctioning instruments when possible. All of the objectives either have been met or will be met soon. All instruments and the necessary instrument stands were purchased or made and were available for inclusion on the PAM and DCP stations before the beginning of IFC-1. Due to problems beyond our control, the DCP stations experienced considerable difficulty in becoming operational. To fill some of the gaps caused by the DCP problems, Campbell CR21-X data loggers were installed and the data collected on cassette tapes. Periodic checks of all instruments were made to maintain data quality, make necessary adjustments to certain instruments, replace malfunctioning instruments, and provide instrument calibration. All instruments will be calibrated before the beginning of the 1988 growing season, as soon as the weather permits access to all stations and provides conditions that are not too harsh to work in for extended periods.
High accuracy step gauge interferometer
NASA Astrophysics Data System (ADS)
Byman, V.; Jaakkola, T.; Palosuo, I.; Lassila, A.
2018-05-01
Step gauges are convenient transfer standards for the calibration of coordinate measuring machines. A novel interferometer for step gauge calibrations implemented at VTT MIKES is described. The four-pass interferometer follows Abbe's principle and measures the position of the inductive probe attached to a measuring head. The measuring head of the instrument is connected to a balanced boom above the carriage by a piezo translation stage. A key part of the measuring head is an invar structure on which the inductive probe and the corner cubes of the measuring arm of the interferometer are attached. The invar structure can be elevated so that the probe is raised without breaking the laser beam. During probing, the bending of the probe and the interferometer readings are recorded and the measurement face position is extrapolated to zero force. The measurement process is fully automated and the face positions of the steps can be measured up to a length of 2 m. Ambient conditions are measured continuously and the refractive index of air is compensated for. Before measurements the step gauge is aligned with an integrated 2D coordinate measuring system. The expanded uncertainty of step gauge calibration is $U=\sqrt{(64\ \mathrm{nm})^{2}+(88\times 10^{-9}\,L)^{2}}$.
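As a worked example (ours, not the paper's): at the instrument's full 2 m range the length-dependent term dominates, giving

$$U=\sqrt{(64\ \mathrm{nm})^{2}+(88\times 10^{-9}\times 2\ \mathrm{m})^{2}}=\sqrt{(64\ \mathrm{nm})^{2}+(176\ \mathrm{nm})^{2}}\approx 187\ \mathrm{nm}.$$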
NASA Astrophysics Data System (ADS)
Mitishita, E.; Costa, F.; Martins, M.
2017-05-01
Photogrammetric and Lidar datasets should be in the same mapping or geodetic frame to be used simultaneously in an engineering project. Nowadays direct sensor orientation is a common procedure in simultaneous photogrammetric and Lidar surveys. Although direct sensor orientation technologies provide a high degree of automation thanks to GNSS/INS, the accuracies of the results obtained from photogrammetric and Lidar surveys depend on the quality of a group of parameters that accurately models the conditions of the system at the moment the job is performed. This paper presents a study performed to verify the importance of in situ camera calibration and Integrated Sensor Orientation without control points for increasing the accuracy of photogrammetric and Lidar dataset integration. The horizontal and vertical accuracies of photogrammetric and Lidar dataset integration by photogrammetric procedure improved significantly when the Integrated Sensor Orientation (ISO) approach was performed using Interior Orientation Parameter (IOP) values estimated from the in situ camera calibration. The horizontal and vertical accuracies, estimated by the Root Mean Square Error (RMSE) of the 3D discrepancies from the Lidar check points, improved by around 37% and 198%, respectively.
Using an Automated 3D-tracking System to Record Individual and Shoals of Adult Zebrafish
Maaswinkel, Hans; Zhu, Liqun; Weng, Wei
2013-01-01
Like many aquatic animals, the zebrafish (Danio rerio) moves in a 3D space. It is thus preferable to use a 3D recording system to study its behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single adult zebrafish and groups. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. Step-by-step protocols for calibration and for using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration, etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about the vertical and horizontal distribution of the zebrafish and about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle), and it provides the data necessary to calculate parameters for social cohesion when testing shoals. PMID:24336189
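The dominant optical error the calibration corrects is refraction at the water surface. For near-vertical viewing the effect reduces to the classic apparent-depth relation; a minimal sketch of that first-order correction follows (the published system uses a full empirical calibration rather than this idealization):

```python
N_WATER = 1.333  # refractive index of water relative to air

def true_depth(z_apparent, n=N_WATER):
    """First-order refraction correction for a camera looking down into
    a tank at near-normal incidence: objects appear at depth z/n, so the
    true depth is n times the apparent depth."""
    return n * z_apparent
```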
Meteor44 Video Meteor Photometry
NASA Technical Reports Server (NTRS)
Swift, Wesley R.; Suggs, Robert M.; Cooke, William J.
2004-01-01
Meteor44 is a software system developed at MSFC for the calibration and analysis of video meteor data. The dynamic range of the (8-bit) video data is extended by approximately 4 magnitudes for both meteors and stellar images using saturation compensation. Camera- and lens-specific saturation compensation coefficients are derived from artificial variable star laboratory measurements. Saturation compensation significantly increases the number of meteors with measured intensity and improves the estimation of meteoroid mass distribution. Astrometry is automated to determine each image's plate coefficient using appropriate star catalogs. The images are simultaneously intensity-calibrated from the contained stars to determine the photon sensitivity and the saturation level referenced above the atmosphere. The camera's spectral response is used to compensate for stellar color index and typical meteor spectra in order to report meteor light curves in traditional visual magnitude units. Recent efforts include improved camera calibration procedures, long focal length "streak" meteor photometry and two-station track determination. Meteor44 has been used to analyze data from the 2001, 2002 and 2003 MSFC Leonid observational campaigns as well as several lesser showers. The software is interactive and can be demonstrated using data from recent Leonid campaigns.
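Once a frame's photon sensitivity is known, star-based calibration reduces to the usual zero-point relation between counts and magnitudes. A minimal sketch (the zero point would come from catalog stars in the same frame; saturation compensation would adjust the counts beforehand):

```python
import numpy as np

def instrumental_magnitude(counts, zero_point):
    """Convert background-subtracted source counts to a magnitude using
    a per-image zero point fitted from catalog stars."""
    return zero_point - 2.5 * np.log10(counts)
```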
Klauschen, Frederick; Wienert, Stephan; Schmitt, Wolfgang D; Loibl, Sibylle; Gerber, Bernd; Blohmer, Jens-Uwe; Huober, Jens; Rüdiger, Thomas; Erbstößer, Erhard; Mehta, Keyur; Lederer, Bianca; Dietel, Manfred; Denkert, Carsten; von Minckwitz, Gunter
2015-08-15
Scoring proliferation through Ki67 immunohistochemistry is an important component in predicting therapy response to chemotherapy in patients with breast cancer. However, recent studies have cast doubt on the reliability of "visual" Ki67 scoring in the multicenter setting, particularly in the lower, yet clinically important, proliferation range. Therefore, an accurate and standardized Ki67 scoring is pivotal both in routine diagnostics and in larger multicenter studies. We validated a novel fully automated Ki67 scoring approach that relies on only minimal a priori knowledge of cell properties and requires no training data for calibration. We applied our approach to 1,082 breast cancer samples from the neoadjuvant GeparTrio trial and compared the performance of automated and manual Ki67 scoring. The three groups of autoKi67 as defined by low (≤ 15%), medium (15.1%-35%), and high (>35%) automated scores showed pCR rates of 5.8%, 16.9%, and 29.5%, respectively. AutoKi67 was significantly linked to prognosis, with overall and progression-free survival P values P(OS) < 0.0001 and P(PFS) < 0.0002, compared with P(OS) < 0.0005 and P(PFS) < 0.0001 for manual Ki67 scoring. Moreover, automated Ki67 scoring was an independent prognosticator in the multivariate analysis with P(OS) = 0.002, P(PFS) = 0.009 (autoKi67) versus P(OS) = 0.007, P(PFS) = 0.004 (manual Ki67). The computer-assisted Ki67 scoring approach presented here offers a standardized means of tumor cell proliferation assessment in breast cancer that correlates with clinical endpoints and is deployable in routine diagnostics. It may thus help to solve recently reported reliability concerns in Ki67 diagnostics. ©2014 American Association for Cancer Research.
FMEA of manual and automated methods for commissioning a radiotherapy treatment planning system.
Wexler, Amy; Gu, Bruce; Goddu, Sreekrishna; Mutic, Maya; Yaddanapudi, Sridhar; Olsen, Lindsey; Harry, Taylor; Noel, Camille; Pawlicki, Todd; Mutic, Sasa; Cai, Bin
2017-09-01
To evaluate the level of risk involved in treatment planning system (TPS) commissioning using a manual test procedure (MTP), and to compare the associated process-based risk to that of an automated commissioning process (ACP) by performing an in-depth failure modes and effects analysis (FMEA). The authors collaborated to determine the potential failure modes of the TPS commissioning process using (a) approaches involving manual data measurement, modeling, and validation tests and (b) an automated process utilizing application programming interface (API) scripting, preloaded and premodeled standard radiation beam data, a digital heterogeneous phantom, and an automated commissioning test suite (ACTS). The severity (S), occurrence (O), and detectability (D) were scored for each failure mode and the risk priority numbers (RPN) were derived based on the TG-100 scale. Failure modes were then analyzed and ranked based on RPN. The total number of failure modes, the RPN scores, and the ten failure modes with highest risk were described and cross-compared between the two approaches. An RPN reduction analysis is also presented and used as another quantifiable metric to evaluate the proposed approach. The FMEA of the MTP resulted in 47 failure modes with an average RPN of 161 and an average severity of 6.7. The highest-risk process, "Measurement Equipment Selection," had a maximum RPN of 640. The FMEA of the ACP resulted in 36 failure modes with an average RPN of 73 and an average severity of 6.7. The highest-risk process, "EPID Calibration," had a maximum RPN of 576. An FMEA of treatment planning commissioning tests using automation and standardization via API scripting, preloaded and premodeled standard beam data, and digital phantoms suggests that errors and risks may be reduced through the use of an ACP. © 2017 American Association of Physicists in Medicine.
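The TG-100 risk metric itself is a simple product of three 1-10 scores. In the sketch below the first two failure-mode names come from the abstract, but the S/O/D decompositions are hypothetical, chosen only so the two maxima match the reported RPNs of 640 and 576:

```python
def rpn(severity, occurrence, detectability):
    """TG-100 risk priority number: RPN = S * O * D."""
    return severity * occurrence * detectability

failure_modes = [
    ("Measurement equipment selection", 8, 8, 10),  # 640 (MTP maximum)
    ("EPID calibration", 8, 8, 9),                  # 576 (ACP maximum)
    ("Beam-data entry error", 7, 5, 6),             # hypothetical
]
for name, s, o, d in sorted(failure_modes, key=lambda f: -rpn(*f[1:])):
    print(f"{name}: RPN = {rpn(s, o, d)}")
```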
TFTR CAMAC systems and components
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauch, W.A.; Bergin, W.; Sichta, P.
1987-08-01
Princeton's tokamak fusion test reactor (TFTR) utilizes Computer Automated Measurement and Control (CAMAC) to provide instrumentation for real and quasi real time control, monitoring, and data acquisition systems. This paper describes and discusses the complement of CAMAC hardware systems and components that comprise the interface for tokamak control and measurement instrumentation, and communication with the central instrumentation control and data acquisition (CICADA) system. It also discusses CAMAC reliability and calibration, types of modules used, a summary of data acquisition and control points, and various diagnostic maintenance tools used to support and troubleshoot typical CAMAC systems on TFTR.
3D measurement by digital photogrammetry
NASA Astrophysics Data System (ADS)
Schneider, Carl T.
1993-12-01
Photogrammetry is well known in geodetic surveying as aerial photogrammetry and in close-range applications such as architectural photogrammetry. Photogrammetric methods and algorithms, combined with digital cameras and digital image processing, are now being introduced for industrial applications such as automation and quality control. This paper describes the photogrammetric and digital image processing algorithms and the calibration methods, and demonstrates them with application examples: a digital photogrammetric workstation serving as a mobile multi-purpose 3D measuring tool, and a tube measuring system as an example of a single-purpose tool.
Evaluating Corrosion in SAVY Containers using Non-Destructive Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davenport, Matthew Nicholas; Vaidya, Rajendra U.; Abeyta, Adrian Anthony
PowerPoint presentation on Ultrasonic and Eddy Current NDT; UT Theory; Eddy current (ECA): How it works; Controlled Corrosion at NM Tech; Results – HCl Corrosion; Waveform Data for 10 M HCl; Accuracy Statistics; Results – FeCl3 Pitting; Waveforms for Anhydrous FeCl3; Analyzing Corroded Stainless Steel 316L Plates; 316L Plate to Imitate Pitting; ECA Pit Depth Calibration Curve; C-Scan Imaging; UT Pit Detection; SST Containers: Ultrasonic (UT) vs. CMM; UT Data Analysis; UT Conclusions and Observations; ECA Conclusions; Automated System Vision.
NASA Technical Reports Server (NTRS)
1981-01-01
The modified CG2000 crystal grower construction, installation, and machine checkout were completed. Process development checkout proceeded with several dry runs and one growth run. Several machine calibration and functional problems were discovered and corrected. Exhaust gas analysis system alternatives were evaluated, and an integrated system was approved and ordered. Several growth runs on a development CG2000 RC grower showed that complete neck, crown, and body automated growth can be achieved with only one operator input.
Butler, Kenneth R; Minor, Deborah S; Benghuzzi, Hamed A; Tucci, Michelle
2010-01-01
The objective of this study was to evaluate terminal digit preference in blood pressure (BP) measurements taken from a sample of clinics at a large academic health sciences center. We hypothesized that terminal digit preference would occur more frequently in BP measurements taken with manual mercury sphygmomanometry compared to those obtained with semi-automated instruments. A total of 1,393 BP measures were obtained in 16 ambulatory and inpatient sites by personnel using both mercury (n=1,286) and semi-automated (n=107) devices. For the semi-automated devices, a trained observer repeated the patient's BP following American Heart Association recommendations using a similar device with a known calibration history. At least two recorded systolic and diastolic blood pressures (average of two or more readings for each) were obtained for all manual mercury readings. Data were evaluated using descriptive statistics and chi-square as appropriate (SPSS software, 17.0). Overall, zero and other terminal digit preference was observed more frequently in systolic (χ² = 883.21, df = 9, p < 0.001) and diastolic readings (χ² = 1076.77, df = 9, p < 0.001) from manual instruments, while all end digits obtained by clinic staff using semi-automated devices were more evenly distributed (χ² = 8.23, df = 9, p = 0.511 for systolic and χ² = 10.48, df = 9, p = 0.313 for diastolic). In addition to zero digit bias in mercury readings, even numbers were reported with significantly higher frequency than odd numbers. There was no detectable digit preference observed when examining semi-automated measurements by clinic staff or device type for either systolic or diastolic BP measures. These findings demonstrate that terminal digit preference was more likely to occur with manual mercury sphygmomanometry. This phenomenon was most likely the result of mercury column graduation in 2 mm Hg increments producing a higher than expected frequency of even digits.
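Each reported test compares the observed counts of terminal digits 0-9 against a uniform expectation over ten digits (hence df = 9). A minimal sketch with invented counts, not the study's data:

```python
import numpy as np
from scipy.stats import chisquare

manual = np.array([412, 8, 230, 10, 245, 15, 198, 9, 150, 9])  # digits 0-9
semi = np.array([12, 10, 11, 9, 13, 10, 9, 12, 11, 10])

for label, counts in [("manual", manual), ("semi-automated", semi)]:
    # default expectation: all ten terminal digits equally likely
    stat, p = chisquare(counts)
    print(f"{label}: chi2 = {stat:.2f}, df = {counts.size - 1}, p = {p:.3g}")
```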
Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.
Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo
2017-07-01
Eye-movements are the only directly observable behavioural signals that are highly correlated with actions at the task level, and they are proactive of body movements and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (or in amputees), including stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy. Despite this benefit, eye tracking is not widely used as a control interface for robotic systems in movement-impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom-designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location of the workspace by simply looking at the target and winking once. This purely eye-tracking-based system enables the end-user to retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points, versus the dozen points of standard approaches, resulting in beyond state-of-the-art 3D accuracy and precision.
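With thousands of (gaze, robot-position) pairs from the Peano-curve sweep, fitting the calibration map becomes an ordinary regression problem. A deliberately simplified linear sketch (the published method is more elaborate; the array shapes are assumptions):

```python
import numpy as np

def fit_gaze_calibration(gaze_features, robot_xyz):
    """Affine least-squares map from binocular gaze features (N x d)
    to 3D end points (N x 3) collected while the eyes tracked the robot."""
    n = gaze_features.shape[0]
    X = np.hstack([gaze_features, np.ones((n, 1))])  # add bias column
    W, *_ = np.linalg.lstsq(X, robot_xyz, rcond=None)
    return W  # predict a point with: np.append(g, 1.0) @ W
```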
Monitoring stream sediment loads in response to agriculture in Prince Edward Island, Canada.
Alberto, Ashley; St-Hilaire, Andre; Courtenay, Simon C; van den Heuvel, Michael R
2016-07-01
Increased agricultural land use leads to accelerated erosion and deposition of fine sediment in surface water. Monitoring of suspended sediment yields has proven challenging due to the spatial and temporal variability of sediment loading. Reliable sediment yield calculations depend on accurate monitoring of these highly episodic sediment loading events. This study aims to quantify precipitation-induced loading of suspended sediments on Prince Edward Island, Canada. Turbidity is considered to be a reasonably accurate proxy for suspended sediment data. In this study, turbidity was used to monitor suspended sediment concentration (SSC) and was measured for 2 years (December 2012-2014) in three subwatersheds with varying degrees of agricultural land use ranging from 10 to 69 %. Comparison of three turbidity meter calibration methods, two using suspended streambed sediment and one using automated sampling during rainfall events, revealed that the use of SSC samples constructed from streambed sediment was not an accurate replacement for water column sampling during rainfall events for calibration. Different particle size distributions in the three rivers produced significant impacts on the calibration methods demonstrating the need for river-specific calibration. Rainfall-induced sediment loading was significantly greater in the most agriculturally impacted site only when the load per rainfall event was corrected for runoff volume (total flow minus baseflow), flow increase intensity (the slope between the start of a runoff event and the peak of the hydrograph), and season. Monitoring turbidity, in combination with sediment modeling, may offer the best option for management purposes.
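A river-specific turbidity-to-SSC rating is typically a regression fitted per site; a log-log linear form is one common choice. A minimal sketch under that assumption (not the calibration models actually fitted in the study):

```python
import numpy as np

def fit_turbidity_rating(turbidity_ntu, ssc_mg_l):
    """Fit SSC = a * NTU**b in log-log space and return a predictor."""
    b, log_a = np.polyfit(np.log10(turbidity_ntu), np.log10(ssc_mg_l), 1)
    return lambda ntu: 10.0 ** log_a * np.asarray(ntu) ** b
```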
Vessel calibre and flow splitting relationships at the internal carotid artery terminal bifurcation.
Chnafa, C; Bouillot, P; Brina, O; Delattre, B M A; Vargas, M I; Lovblad, K O; Pereira, V M; Steinman, D A
2017-11-01
Vessel lumen calibres and flow rates are thought to be related by mathematical power laws, reflecting the optimization of cardiac versus metabolic work. While these laws have been confirmed indirectly via measurement of branch calibres, there is little data confirming power law relationships of flow distribution to branch calibres at individual bifurcations. Flow rates and diameters of parent and daughter vessels of the internal carotid artery terminal bifurcation were determined, via robust and automated methods, from 4D phase-contrast magnetic resonance imaging and 3D rotational angiography of 31 patients. Junction exponents were 2.06 ± 0.44 for relating parent to daughter branch diameters (geometrical exponent), and 2.45 ± 0.75 for relating daughter branch diameters to their flow division (flow split exponent). These exponents were not significantly different, but showed large inter- and intra-individual variations, and with confidence intervals excluding the theoretical optimum of 3. Power law fits of flow split versus diameter ratio and pooled flow rates versus diameters showed exponents of 2.17 and 1.96, respectively. A significant negative correlation was found between age and the geometrical exponent (r = -0.55, p = 0.003) but not the flow split exponent. We also found a dependence of our results on how lumen diameter is measured, possibly explaining some of the variability in the literature. Our study confirms that, on average, division of flow to the middle and anterior cerebral arteries is related to these vessels' relative calibres via a power law, but it is closer to a square law than a cube law as commonly assumed.
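Both exponents have closed or nearly closed forms: the geometrical exponent n solves d0^n = d1^n + d2^n, and the flow-split exponent m follows directly from Q1/Q2 = (d1/d2)^m. A minimal sketch with illustrative diameters and flows (not patient data):

```python
import numpy as np
from scipy.optimize import brentq

def geometric_exponent(d0, d1, d2):
    """Junction exponent n satisfying d0**n = d1**n + d2**n
    (n = 3 would be the classical cube law)."""
    return brentq(lambda n: d1**n + d2**n - d0**n, 0.5, 6.0)

def flow_split_exponent(q1, q2, d1, d2):
    """Exponent m relating flow division to daughter calibres."""
    return np.log(q1 / q2) / np.log(d1 / d2)

# Illustrative ICA-terminus values: parent 4.0 mm; daughters 3.0, 2.6 mm
print(geometric_exponent(4.0, 3.0, 2.6))        # ~2, near the reported mean
print(flow_split_exponent(2.8, 2.0, 3.0, 2.6))  # ~2.4
```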
NASA Technical Reports Server (NTRS)
Scarino, Benjamin; Doelling, David R.; Haney, Conor; Bedka, Kristopher; Minnis, Patrick; Gopalan, Arun; Bhatt, Rajendra
2017-01-01
Accurate characterization of the Earth's radiant energy is critical for many climate monitoring and weather forecasting applications. For example, groups at the NASA Langley Research Center rely on stable visible- and infrared-channel calibrations in order to understand the temporal/spatial distribution of hazardous storms, as determined from an automated overshooting convective top detection algorithm. Therefore, in order to facilitate reliable, climate-quality retrievals, it is important that consistent calibration coefficients across satellite platforms are made available to the remote sensing community, and that calibration anomalies are recognized and mitigated. One such anomaly is the infrared imager brightness temperature (BT) drift that occurs for some Geostationary Earth Orbit satellite (GEOsat) instruments near local midnight. Currently the Global Space-Based Inter-Calibration System (GSICS) community uses the hyperspectral Infrared Atmospheric Sounding Interferometer (IASI) sensor as a common reference to uniformly calibrate GEOsat IR imagers. However, the combination of IASI, which has a 21:30 local equator crossing time (LECT), and hyperspectral Atmospheric Infrared Sounder (AIRS; 01:30 LECT) observations are unable to completely resolve the GEOsat midnight BT bias. The precessing orbit of the Tropical Rainfall Measuring Mission (TRMM) Visible and Infrared Scanner (VIRS), however, allows sampling of all local hours every 46 days. Thus, VIRS has the capability to quantify the BT midnight effect observed in concurrent GEOsat imagers. First, the VIRS IR measurements are evaluated for long-term temporal stability between 2002 and 2012 by inter-calibrating with Aqua-MODIS. Second, the VIRS IR measurements are assessed for diurnal stability by inter-calibrating with Meteosat-9 (Met-9), a spin-stabilized GEOsat imager that does not manifest any diurnal dependency. In this case, the Met-9 IR imager is first adjusted with the official GSICS calibration coefficients. Then VIRS is used as a diurnal calibration reference transfer to produce hourly corrections of GEOsat IR imager BT. For the 9 three-axis stabilized GEO imagers concurrent with VIRS, the midnight effect increased the BT on average by 0.5 K (11 μm) and 0.4 K (12 μm), with a peak at approximately 01:00 local time. As expected, the spin-stabilized GEOsats revealed a smaller diurnal temperature cycle (mostly < 0.2 K) with inconsistent peak hours.
Investigations on blood coagulation in the green iguana (Iguana iguana).
Kubalek, S; Mischke, R; Fehr, M
2002-05-01
The prothrombin time (PT), activated partial thromboplastin time (APTT), thrombin time, kaolin clotting time (KCT), dilute Russell's viper venom time (DRVVT) and reptilase time, as well as five different plasma fibrinogen assays [gravimetry, Jacobsson method (extinction at 280 nm), Millar method (heat precipitation), kinetic turbidometry, Clauss method] and resonance thrombography were performed in 26 clinically healthy green iguanas. All assays were carried out in comparison with pooled normal canine plasma. In iguana plasma, the PT (median 453-831 s, depending on the reagent), APTT (median 170-242 s, depending on the reagent), thrombin time (median 118 to >1000 s, depending on thrombin activity), KCT (median 274 s), DRVVT (median 349 s) and reptilase time (all samples >1000 s) were widely scattered at the limit of measurability. Only fibrinogen concentrations measured using the Jacobsson method (median 4.40 g/l) correlated well (r = 0.91) with gravimetry (median 4.22 g/l). The results of this study indicate a limited suitability and a confined diagnostic significance of the selected methods in the green iguana. This may be caused by the species specificity of certain components of the reagents used, as well as by a less than optimal test system, i.e. the relationship of test reagent to clotting factor concentrations in iguana plasma.
Levander, Fredrik; James, Peter
2005-01-01
The identification of proteins separated on two-dimensional gels is most commonly performed by trypsin digestion and subsequent matrix-assisted laser desorption ionization (MALDI) with time-of-flight (TOF). Recently, atmospheric pressure (AP) MALDI coupled to an ion trap (IT) has emerged as a convenient method to obtain tandem mass spectra (MS/MS) from samples on MALDI target plates. In the present work, we investigated the feasibility of using the two methodologies in line as a standard method for protein identification. In this setup, the high mass accuracy MALDI-TOF spectra are used to calibrate the peptide precursor masses in the lower mass accuracy AP-MALDI-IT MS/MS spectra. Several software tools were developed to automate the analysis process. Two sets of MALDI samples, consisting of 142 and 421 gel spots, respectively, were analyzed in a highly automated manner. In the first set, the protein identification rate increased from 61% for MALDI-TOF only to 85% for MALDI-TOF combined with AP-MALDI-IT. In the second data set the increase in protein identification rate was from 44% to 58%. AP-MALDI-IT MS/MS spectra were in general less effective than the MALDI-TOF spectra for protein identification, but the combination of the two methods clearly enhanced the confidence in protein identification.
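The recalibration step uses matched peaks to map the ion trap's precursor masses onto the TOF scale; a linear correction is one simple model for this. A minimal sketch with invented matched masses (not data from the paper):

```python
import numpy as np

def fit_mass_recalibration(it_mz, tof_mz):
    """Linear correction mapping AP-MALDI ion-trap precursor m/z onto
    the high-accuracy MALDI-TOF masses of the same peptides."""
    a, b = np.polyfit(it_mz, tof_mz, 1)
    return lambda mz: a * np.asarray(mz) + b

it = np.array([842.60, 1045.70, 1296.80, 2211.30])   # hypothetical pairs
tof = np.array([842.51, 1045.56, 1296.68, 2211.10])
recalibrate = fit_mass_recalibration(it, tof)
print(recalibrate(1500.0))
```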
Automated facial acne assessment from smartphone images
NASA Astrophysics Data System (ADS)
Amini, Mohammad; Vasefi, Fartash; Valdebran, Manuel; Huang, Kevin; Zhang, Haomiao; Kemp, William; MacKinnon, Nicholas
2018-02-01
A smartphone mobile medical application is presented that analyzes the health of facial skin using a smartphone image and cloud-based image processing techniques. The mobile application uses the camera to capture a front face image of a subject, after which the captured image is spatially calibrated based on fiducial points such as the position of the iris of the eye. A facial recognition algorithm is used to identify features of the human face image, to normalize the image, and to define facial regions of interest (ROI) for acne assessment. We identify acne lesions and classify them into two categories: papules and pustules. Automated facial acne assessment was validated by performing tests on images of 60 digital human models and 10 real human face images. The application was able to identify 92% of acne lesions within five facial ROIs. The classification accuracy for separating papules from pustules was 98%. Combined with in-app documentation of treatment, lifestyle factors, and automated facial acne assessment, the app can be used in both cosmetic and clinical dermatology. It allows users to quantitatively self-measure acne severity and treatment efficacy on an ongoing basis to help them manage their chronic facial acne.
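Spatial calibration from an iris fiducial exploits the near-constant adult iris size. A minimal sketch assuming the commonly cited ~11.7 mm average horizontal visible iris diameter (an anatomical average, not a figure taken from the paper):

```python
def mm_per_pixel(iris_px, iris_mm=11.7):
    """Image scale from the iris: the adult horizontal visible iris
    diameter is roughly constant (~11.7 mm on average), so its pixel
    extent calibrates the image."""
    return iris_mm / iris_px

scale = mm_per_pixel(86.0)   # iris spans 86 px in a hypothetical selfie
lesion_mm = 24.0 * scale     # a 24-px lesion converted to millimetres
```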
Dooraghi, Alex A.; Carroll, Lewis; Collins, Jeffrey; ...
2016-03-09
Automated protocols for measuring and dispensing solutions containing radioisotopes are essential not only for providing a safe environment for radiation workers but also to ensure accuracy of dispensed radioactivity and an efficient workflow. For this purpose, we have designed ARAS, an automated radioactivity aliquoting system for dispensing solutions containing positron-emitting radioisotopes with particular focus on fluorine-18 (18F). The key to the system is the combination of a radiation detector measuring radioactivity concentration, in line with a peristaltic pump dispensing known volumes. Results show that the combined system keeps volume variation within 5% for dispensed volumes of 20 μL or greater. For such volumes, the delivered radioactivity agrees with the requested amount, as measured independently with a dose calibrator, to within 2% on average. In conclusion, the integration of the detector and pump in an in-line system leads to a flexible and compact approach that can accurately dispense solutions with radioactivity concentrations ranging from the high values typical of [18F]fluoride directly produced from a cyclotron (~0.1-1 mCi μL⁻¹) to the low values typical of batches of [18F]fluoride-labeled radiotracers intended for preclinical mouse scans (~1-10 μCi μL⁻¹).
Theanponkrang, Somjai; Suginta, Wipa; Weingart, Helge; Winterhalter, Mathias; Schulte, Albert
2015-01-01
A new automated pharmacoanalytical technique for convenient quantification of redox-active antibiotics has been established by combining the electrocatalytic benefits of a carbon nanotube (CNT) sensor modification for analyte detection with the merits of a robotic electrochemical device capable of sequential non-manual sample measurements in 24-well microtiter plates. Norfloxacin (NFX) and ciprofloxacin (CFX), two standard fluoroquinolone antibiotics, were used in automated calibration measurements by differential pulse voltammetry (DPV), and linear ranges of 1-10 μM for NFX and 2-100 μM for CFX were achieved. The lowest detectable levels were estimated to be 0.3±0.1 μM (n=7) for NFX and 1.6±0.1 μM (n=7) for CFX. In standard solutions or tablet samples of known content, both analytes could be quantified with the robotic DPV microtiter plate assay, with recoveries within ±4% of 100%. Recoveries were equally good when NFX was evaluated in human serum samples spiked with NFX. The use of simple instrumentation, convenience of execution, and high effectiveness in analyte quantitation recommend the merger of automated microtiter plate voltammetry and CNT-supported electrochemical drug detection as a novel methodology for antibiotic testing in pharmaceutical and clinical research and quality control laboratories.
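Calibration and detection-limit estimation for such an assay follow the usual linear-regression recipe, with the lowest detectable level taken as a multiple of the residual noise over the slope. A sketch with invented currents (not the published data):

```python
import numpy as np

conc = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])       # uM, hypothetical
i_pk = np.array([0.21, 0.40, 0.83, 1.22, 1.61, 2.05])  # uA, hypothetical

slope, intercept = np.polyfit(conc, i_pk, 1)
resid = i_pk - (slope * conc + intercept)
sigma = np.std(resid, ddof=2)        # residual standard deviation

lod = 3.0 * sigma / slope            # common 3-sigma criterion
print(f"sensitivity = {slope:.3f} uA/uM, LOD ~ {lod:.2f} uM")
```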
An Automated Algorithm for Identifying and Tracking Transverse Waves in Solar Images
NASA Astrophysics Data System (ADS)
Weberg, Micah J.; Morton, Richard J.; McLaughlin, James A.
2018-01-01
Recent instrumentation has demonstrated that the solar atmosphere supports omnipresent transverse waves, which could play a key role in energizing the solar corona. Large-scale studies are required in order to build up an understanding of the general properties of these transverse waves. To help facilitate this, we present an automated algorithm for identifying and tracking features in solar images and extracting the wave properties of any observed transverse oscillations. We test and calibrate our algorithm using a set of synthetic data, which includes noise and rotational effects. The results indicate an accuracy of 1%–2% for displacement amplitudes and 4%–10% for wave periods and velocity amplitudes. We also apply the algorithm to data from the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory and find good agreement with previous studies. Of note, we find that 35%–41% of the observed plumes exhibit multiple wave signatures, which indicates either the superposition of waves or multiple independent wave packets observed at different times within a single structure. The automated methods described in this paper represent a significant improvement on the speed and quality of direct measurements of transverse waves within the solar atmosphere. This algorithm unlocks a wide range of statistical studies that were previously impractical.
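Extracting wave properties from a tracked feature amounts to fitting a sinusoid to its transverse displacement time series. A minimal sketch (the model form and initial guesses are illustrative, not the authors' algorithm):

```python
import numpy as np
from scipy.optimize import curve_fit

def wave(t, amp, period, phase, offset):
    return amp * np.sin(2.0 * np.pi * t / period + phase) + offset

def fit_wave(t, displacement):
    """Recover displacement amplitude, period, and velocity amplitude."""
    p0 = [np.std(displacement) * np.sqrt(2.0),  # amplitude guess
          (t[-1] - t[0]) / 3.0,                 # period guess
          0.0,
          float(np.mean(displacement))]
    (amp, period, _, _), _ = curve_fit(wave, t, displacement, p0=p0)
    v_amp = 2.0 * np.pi * abs(amp) / period     # velocity amplitude
    return abs(amp), period, v_amp
```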
NASA Astrophysics Data System (ADS)
Pratt, P.
2012-12-01
Ocean color bands on VIIRS span the visible spectrum and include two NIR bands. There are sixteen detectors per band and two HAM (half-angle mirror) sides, giving a total of thirty-two independent systems. For each scan, 3,200 pixels are collected, and each has a fixed specific optical path and a dynamic position relative to the earth geoid. For a given calibration target where scene variation is minimized, sensor characteristics can be observed. This gives insight into the performance and calibration of the instrument from a sensor-centric perspective. Calibration of the blue bands is especially challenging since there are few blue targets on land. An ocean region called the South Pacific Gyre (SPG) was chosen as a calibration target for this investigation because of its known stability and large area. Thousands of pixels from every granule that views the SPG are collected daily through an automated system and tabulated along with the detector, HAM side and scan position. These are then collated and organized in a sensor-centric set of tables. The data are then analyzed by slicing by each variable and plotting in a number of ways over time. Trends in the data show that the VIIRS sensor is largely behaving as expected according to heritage data, and they also reveal weaknesses where additional characterization of the sensor is possible. This work by the Northrop Grumman NPP CalVal Team supports the VIIRS on-orbit calibration and validation teams for the sensor and ocean color, and provides scientists interested in performing ground truth with results that show which detectors and scan angles are the most reliable over time. This novel approach offers a comprehensive sensor-centric on-orbit characterization of the VIIRS instrument on the NASA Suomi NPP mission.
Large-N correlator systems for low frequency radio astronomy
NASA Astrophysics Data System (ADS)
Foster, Griffin
Low frequency radio astronomy has entered a second golden age driven by the development of a new class of large-N interferometric arrays. The low frequency array (LOFAR) and a number of redshifted HI Epoch of Reionization (EoR) arrays are currently undergoing commissioning and regularly observing. Future arrays of unprecedented sensitivity and resolution at low frequencies, such as the square kilometer array (SKA) and the hydrogen epoch of reionization array (HERA), are in development. The combination of advancements in specialized field programmable gate array (FPGA) hardware for signal processing, computing and graphics processing unit (GPU) resources, and new imaging and calibration algorithms has opened up the oft-underused radio band below 300 MHz. These interferometric arrays require efficient implementation of digital signal processing (DSP) hardware to compute the baseline correlations. FPGA technology provides an optimal platform on which to develop new correlators. The significant growth in data rates from these systems requires automated software to reduce the correlations in real time before storing the data products to disk. Low frequency, widefield observations introduce a number of unique calibration and imaging challenges. The efficient implementation of FX correlators using FPGA hardware is presented. Two correlators have been developed, one for the 32-element BEST-2 array at Medicina Observatory and the other for the 96-element LOFAR station at Chilbolton Observatory. In addition, calibration and imaging software has been developed for each system which makes use of the radio interferometry measurement equation (RIME) to derive calibrations. A process for generating sky maps from widefield LOFAR station observations is presented. Shapelets, a method of modelling extended structures such as resolved sources and beam patterns, has been adapted for radio astronomy use to further improve system calibration. Scaling of computing technology allows for the development of larger correlator systems, which in turn allows for improvements in sensitivity and resolution. This requires new calibration techniques which account for a broad range of systematic effects.
Coarse-Grained Models for Automated Fragmentation and Parametrization of Molecular Databases.
Fraaije, Johannes G E M; van Male, Jan; Becherer, Paul; Serral Gracià, Rubèn
2016-12-27
We calibrate coarse-grained interaction potentials suitable for screening large data sets in top-down fashion. Three new algorithms are introduced: (i) automated decomposition of molecules into coarse-grained units (fragmentation); (ii) Coarse-Grained Reference Interaction Site Model-Hypernetted Chain (CG RISM-HNC) as an intermediate proxy for dissipative particle dynamics (DPD); and (iii) a simple top-down coarse-grained interaction potential/model based on activity coefficient theories from engineering (using COSMO-RS). We find that the fragment distribution follows Zipf and Heaps scaling laws. The accuracy in Gibbs energy of mixing calculations is a few tenths of a kilocalorie per mole. As a final proof of principle, we use full coarse-grained sampling through DPD thermodynamic integration to calculate log P_OW for 4627 compounds with an average error of 0.84 log units. The computational speed per calculation is a few seconds for CG RISM-HNC and a few minutes for DPD thermodynamic integration.
Java Tool Framework for Automation of Hardware Commissioning and Maintenance Procedures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, J C; Fisher, J M; Gordon, J B
2007-10-02
The National Ignition Facility (NIF) is a 192-beam laser system designed to study high energy density physics. Each beam line contains a variety of line replaceable units (LRUs) that contain optics, stepping motors, sensors and other devices to control and diagnose the laser. During commissioning and subsequent maintenance of the laser, LRUs undergo a qualification process using the Integrated Computer Control System (ICCS) to verify and calibrate the equipment. The commissioning processes are both repetitive and tedious when performed through remote manual computer controls, making them ideal candidates for software automation. Maintenance and Commissioning Tool (MCT) software was developed to improve the efficiency of the qualification process. The tools are implemented in Java, leveraging ICCS services and CORBA to communicate with the control devices. The framework provides easy-to-use mechanisms for handling configuration data, task execution, task progress reporting, and generation of commissioning test reports. The tool framework design and application examples will be discussed.
NASA Astrophysics Data System (ADS)
Samson, Arnaud; Thibaudeau, Christian; Bouchard, Jonathan; Gaudin, Émilie; Paulin, Caroline; Lecomte, Roger; Fontaine, Réjean
2018-05-01
A fully automated time alignment method based on a positron timing probe was developed to correct the channel-to-channel coincidence time dispersion of the LabPET II avalanche photodiode-based positron emission tomography (PET) scanners. The timing probe was designed to directly detect positrons and generate an absolute time reference. The probe-to-channel coincidences are recorded and processed using firmware embedded in the scanner hardware to compute the time differences between detector channels. The time corrections are then applied in real-time to each event in every channel during PET data acquisition to align all coincidence time spectra, thus enhancing the scanner time resolution. When applied to the mouse version of the LabPET II scanner, the calibration of 6 144 channels was performed in less than 15 min and showed a 47% improvement on the overall time resolution of the scanner, decreasing from 7 ns to 3.7 ns full width at half maximum (FWHM).
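The correction reduces to a per-channel subtraction: each channel's mean time difference against the probe's absolute reference becomes its offset. A minimal sketch of that logic (the array layout is an assumption, not the LabPET II firmware format):

```python
import numpy as np

def channel_offsets(probe_dt, channel_ids, n_channels=6144):
    """Per-channel offsets from probe coincidences: mean time difference
    of each channel relative to the positron probe's time reference."""
    offsets = np.zeros(n_channels)
    for ch in np.unique(channel_ids):
        offsets[ch] = probe_dt[channel_ids == ch].mean()
    return offsets

def align(event_times, event_channels, offsets):
    """Real-time correction: subtract the channel offset from each event
    so all coincidence time spectra line up."""
    return event_times - offsets[event_channels]
```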
Development of an automated on-line electrochemical chlorite ion sensor.
Myers, John N; Steinecker, William H; Sandlin, Zechariah D; Cox, James A; Gordon, Gilbert; Pacey, Gilbert E
2012-05-30
A sensor system for the automatic, in-line, determination of chlorite ion is reported. Electroanalytical measurements were performed in electrolyte-free liquids by using an electrochemical probe (EC), which enables in-line detection in high-resistance media such as disinfected water. Cyclic voltammetry scan rate studies suggest that the current arising from the oxidation of chlorite ion at an EC probe is mass-transfer limited. By coupling FIA with an EC probe amperometric cell, automated analysis was achieved. This sensor is intended to fulfill the daily monitoring requirements of the EPA DBP regulations for chlorite ion. Detection limits of 0.02-0.13 mg/L were attained, which is about one order of magnitude below the MRDL. The sensor showed no faradaic signal for perchlorate, chlorate, or nitrate. The lifetime and stability of the sensor were investigated by measuring calibration curves over time under constant-flow conditions. Detection limits of <0.1 mg/L were repeatedly achieved over a period of three weeks. Copyright © 2012 Elsevier B.V. All rights reserved.
Advanced Data Acquisition Systems with Self-Healing Circuitry
NASA Technical Reports Server (NTRS)
Larson, William E.; Ihlefeld, Curtis M.; Medelius, Pedro J.; Delgado, H. (Technical Monitor)
2001-01-01
Kennedy Space Center's Spaceport Engineering & Technology Directorate has developed a data acquisition system that will help drive down the cost of ground launch operations. This system automates both the physical measurement set-up function as well as configuration management documentation. The key element of the system is a self-configuring, self-calibrating, signal-conditioning amplifier that automatically adapts to any sensor to which it is connected. This paper will describe the core technology behind this device and the automated data system in which it has been integrated. The paper will also describe the revolutionary enhancements that are planned for this innovative measurement technology. All measurement electronics devices contain circuitry that, if it fails or degrades, requires the unit to be replaced, adding to the cost of operations. Kennedy Space Center is now developing analog circuits that will be able to detect their own failure and dynamically reconfigure their circuitry to restore themselves to normal operation. This technology will have wide ranging application in all electronic devices used in space and ground systems.
Computational efficiency for the surface renewal method
NASA Astrophysics Data System (ADS)
Kelley, Jason; Higgins, Chad
2018-04-01
Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and they were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications that demonstrate how simple modifications can dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. Increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
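A representative example of such a simplification: the lagged structure functions at the core of SR analysis (Van Atta-style) can be computed with array slicing in O(N) per lag instead of explicit sample-by-sample loops. A minimal sketch under that framing:

```python
import numpy as np

def structure_functions(temp, lag):
    """2nd-, 3rd-, and 5th-order structure functions of a high-frequency
    temperature series at one sample lag, vectorized via slicing."""
    d = temp[lag:] - temp[:-lag]
    return (d ** 2).mean(), (d ** 3).mean(), (d ** 5).mean()

ts = np.random.randn(36000)   # synthetic 10 Hz series, one hour
s2, s3, s5 = zip(*(structure_functions(ts, j) for j in range(1, 51)))
```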
GNAT: A Global Network of Astronomical Telescopes
NASA Astrophysics Data System (ADS)
Crawford, David L.
1995-12-01
Astronomical resources are increasingly directed toward development of very large telescopes, and many facilities are compelled to cease operations of smaller telescopes. A real concern is emerging with respect to issues of access to astronomical imaging systems for the majority of astronomers who will have little or no opportunity to work with the larger telescopes. Further concern is developing with regard to the means for conducting observationally intensive fundamental astronomical imaging programs, such as surveys, monitoring, and standards calibration. One attractive potential solution is a global network of (automated) astronomical telescopes (GNAT). Initial steps have been taken to turn this network into a reality. GNAT has been incorporated as a nonprofit corporation, membership drives have begun and several institutions have joined. The first two open GNAT meetings have now been held to define hardware and software systems, and an order has been placed for the first of the GNAT automated telescopes. In this presentation we discuss the goals and status of GNAT and its implications for astronomical imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zolnierczuk, Piotr A; Vacaliuc, Bogdan; Sundaram, Madhan
The Liquids Reflectometer instrument installed at the Spallation Neutron Source (SNS) enables observations of chemical kinetics, solid-state reactions and phase-transitions of thin film materials at both solid and liquid surfaces. Effective measurement of these behaviors requires each sample to be calibrated dynamically using the neutron beam and the data acquisition system in a feedback loop. Since the SNS is an intense neutron source, the time needed to perform the measurement can be the same as the alignment process, leading to a labor-intensive operation that is exhausting to users. An update to the instrument control system, completed in March 2013, implemented the key features of automated sample alignment and robot-driven sample management, allowing for unattended operation over extended periods, lasting as long as 20 hours. We present a case study of the effort, detailing the mechanical, electrical and software modifications that were made as well as the lessons learned during the integration, verification and testing process.
Ji, C.; Helmberger, D.V.; Wald, D.J.
2004-01-01
Slip histories for the 2002 M7.9 Denali fault, Alaska, earthquake are derived rapidly from global teleseismic waveform data. Three models, developed in successive phases, progressively improve the match to waveform data and the recovery of rupture details. In the first model (Phase I), analogous to an automated solution, a simple fault plane is fixed based on the preliminary Harvard Centroid Moment Tensor mechanism and the epicenter provided by the Preliminary Determination of Epicenters. This model is then updated (Phase II) by implementing a more realistic fault geometry inferred from Digital Elevation Model topography, and further refined (Phase III) by using the calibrated P-wave and SH-wave arrival times derived from modeling of the nearby 2002 M6.7 Nenana Mountain earthquake. These models are used to predict the peak ground velocity and the shaking intensity field in the fault vicinity. The procedure to estimate local strong motion could be automated and used for global real-time earthquake shaking and damage assessment. © 2004, Earthquake Engineering Research Institute.
Computation of Flow Through Water-Control Structures Using Program DAMFLO.2
Sanders, Curtis L.; Feaster, Toby D.
2004-01-01
As part of its mission to collect, analyze, and store streamflow data, the U.S. Geological Survey computes flow through several dam structures throughout the country. Flows are computed using hydraulic equations that describe flow through sluice and Tainter gates, crest gates, lock gates, spillways, locks, pumps, and siphons, which are calibrated using flow measurements. The program DAMFLO.2 was written to compute, tabulate, and plot flow through dam structures using data that describe the physical properties of dams and various hydraulic parameters and ratings that use time-varying data, such as lake elevations or gate openings. The program uses electronic computer files of time-varying data, such as lake elevation or gate openings, retrieved from the U.S. Geological Survey Automated Data Processing System. Computed time-varying flow data from DAMFLO.2 are output in flat files, which can be entered into the Automated Data Processing System database. All computations are made in units of feet and seconds. DAMFLO.2 uses the procedures and language developed by the SAS Institute Inc.
Regionalisation of parameters of a large-scale water quality model in Lithuania using PAIC-SWAT
NASA Astrophysics Data System (ADS)
Zarrineh, Nina; van Griensven, Ann; Sennikovs, Juris; Bekere, Liga; Plunge, Svajunas
2015-04-01
To comply with the EU Water Framework Directive, all water bodies need to achieve good ecological status. To reach these goals, the Environmental Protection Agency (AAA) has to elaborate river basin district management plans and programmes of measures for all catchments in Lithuania. For this purpose, a Soil and Water Assessment Tool (SWAT) model was set up for all Lithuanian catchments using the most recent version of SWAT2012 rev627, implemented and embedded in a Python workflow by the Center of Processes Analysis and Research (PAIC). The model was calibrated and evaluated using all monitoring data of river discharge and nitrogen and phosphorus concentrations and loads. A regionalisation strategy was set up by identifying 13 hydrological regions according to runoff formation and hydrological conditions. In each region, a representative catchment was selected and calibrated using a combination of manual and automated calibration techniques. After final parameterization and fulfilment of the calibration and validation criteria, the same parameter sets were extrapolated to other catchments within the same hydrological region. A multi-variable calibration/validation (cal/val) strategy was implemented for the following variables: river flow and in-stream NO3, Total Nitrogen, PO4 and Total Phosphorus concentrations. The criteria used for calibration, validation and extrapolation are the Nash-Sutcliffe Efficiency (NSE) for flow, R-squared for water quality variables, and PBIAS (percentage bias) for all variables. For the hydrological calibration, NSE values greater than 0.5 should be achieved, while for validation and extrapolation the thresholds are 0.4 and 0.3, respectively. PBIAS errors have to be less than 20% for calibration, and less than 25% and 30% for validation and extrapolation, respectively. For water quality, R-squared should reach 0.5 for calibration, and 0.4 and 0.3 for validation and extrapolation, respectively, for the nitrogen variables. In addition, the PBIAS error should be less than 40% for calibration and less than 70% for validation and extrapolation for all mentioned water quality variables. For the flow calibration, daily discharge data for 62 stations were provided for the period 1997-2012. Water quality data were provided for more than 500 stations, and 135 data-rich stations were pre-processed into a database containing all observations from 1997-2012. Finally, by implementing this regionalisation strategy, the model satisfactorily predicted the selected variables: more than 90% of stations fulfilled the criteria in the hydrological part, and more than 95% in the water quality part. Keywords: Water Quality Modelling, Regionalisation, Parameterization, Nitrogen and Phosphorus Prediction, Calibration, PAIC-SWAT.
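The acceptance criteria above map directly onto two scalar metrics. A minimal Python sketch of NSE and PBIAS with the thresholds quoted for daily flow (the function names and the PBIAS sign convention are assumptions, not from the abstract):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect; below 0 is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percentage bias between simulated and observed totals."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

# Thresholds quoted in the abstract for daily river flow
NSE_MIN = {"calibration": 0.5, "validation": 0.4, "extrapolation": 0.3}
PBIAS_MAX = {"calibration": 20.0, "validation": 25.0, "extrapolation": 30.0}

def flow_criteria_met(obs, sim, stage="calibration"):
    return nse(obs, sim) >= NSE_MIN[stage] and abs(pbias(obs, sim)) <= PBIAS_MAX[stage]
```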
Using multiple IMUs in a stacked filter configuration for calibration and fine alignment
NASA Astrophysics Data System (ADS)
El-Osery, Aly; Bruder, Stephen; Wedeward, Kevin
2018-05-01
Determination of a vehicle or person's position and/or orientation is a critical task for a multitude of applications ranging from automated cars and first responders to missiles and fighter jets. Most of these applications rely primarily on global navigation satellite systems, e.g., GPS, which are highly vulnerable to degradation whether by environmental factors or malicious actions. The use of inertial navigation techniques has been shown to provide increased reliability of navigation systems in these situations. Due to advances in MEMS technology and processing capabilities, the use of small, low-cost inertial measurement units (IMUs) is becoming increasingly feasible, resulting in low size, weight and power (SWaP) solutions. A known limitation of MEMS IMUs is error growth that causes the navigation solution to drift; furthermore, calibration and initialization are challenging tasks. In this paper, we investigate the use of multiple IMUs to aid in calibrating the navigation system and obtaining accurate initialization by performing fine alignment. By using a centralized filter, physical constraints between the multiple IMUs on a rigid body are leveraged to provide relative updates, which in turn aid in the estimation of the individual biases and scale factors. Developed algorithms will be validated through simulation and actual measurements using low-cost IMUs.
Moore, Christopher; Marchant, Thomas
2017-07-12
Reconstructive volumetric imaging permeates medical practice because of its apparently clear depiction of anatomy. However, the telltale signs of abnormality and its delineation for treatment demand that experts work at the threshold of visibility for hints of structure. Hitherto, a suitable assistive metric that chimes with clinical experience has been absent. This paper develops the complexity measure approximate entropy (ApEn) from its 1D physiological origin into a three-dimensional (3D) algorithm to fill this gap. The first 3D algorithm for this is presented in detail. Validation results for known test arrays are followed by a comparison of fan-beam and cone-beam x-ray computed tomography image volumes used in image-guided radiotherapy for cancer. Results show the structural detail down to individual voxel level, the strength of which is calibrated by the ApEn process itself. The potential for application in machine-assisted manual interaction and automated image processing and interrogation, including radiomics associated with predictive outcome modeling, is discussed.
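For readers unfamiliar with ApEn, a minimal sketch of the classical 1D definition (Pincus) that the paper extends to 3D; the parameter choices m = 2 and r = 0.2·SD are common defaults, not values taken from the paper:

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """1D approximate entropy ApEn(m, r) of a series x (classical form)."""
    x = np.asarray(x, float)
    N, r = len(x), r_factor * x.std()

    def phi(m):
        # All length-m templates stacked as rows of a (N - m + 1, m) matrix
        T = np.array([x[i:i + m] for i in range(N - m + 1)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(T[:, None, :] - T[None, :, :]), axis=2)
        C = np.mean(d <= r, axis=1)   # match fraction per template (incl. self)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

print(approximate_entropy(np.sin(np.linspace(0, 20, 300))))  # low: regular signal
print(approximate_entropy(np.random.randn(300)))             # higher: irregular
```

Lower ApEn indicates regular, predictable structure; higher ApEn indicates complexity, which is what the 3D extension exploits voxel by voxel.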
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsland, M. G.; Dehnel, M. P.; Theroux, J.
2013-04-19
D-Pace has developed a compact cost-effective gamma detector system based on technology licensed from TRIUMF. These photodiode detectors are convenient for detecting the presence of positron emitting radioisotopes, particularly for the case of transport of radioisotopes from a PET cyclotron to hotlab, or from one location to another in an automated radiochemistry processing unit. This paper describes recent calibration experiments undertaken at the Turku PET Centre for stationary and moving sources of F18 and C11 in standard setups. The practical diagnostic utility of using several of these devices to track the transport of radioisotopes from the cyclotron to hotlab is illustrated. For example, such a detector system provides: a semi-quantitative indication of total activity, speed of transport, location of any activity lost en route and effectiveness of follow-up system flushes, a means of identifying bolus break-up, and feedback useful for deciding when to change out tubing.
Informed peg-in-hole insertion using optical sensors
NASA Astrophysics Data System (ADS)
Paulos, Eric; Canny, John F.
1993-08-01
Peg-in-hole insertion is not only a longstanding problem in robotics but also the most common automated mechanical assembly task. In this paper we present a high-precision, self-calibrating peg-in-hole insertion strategy using several very simple, inexpensive, and accurate optical sensors. The self-calibrating feature allows us to achieve successful dead-reckoning insertions with tolerances of 25 microns without any accurate initial position information for the robot, pegs, or holes. The program we implemented works for any cylindrical peg, and the sensing steps do not depend on the peg diameter, which the program does not know. The key to the strategy is the use of a fixed sensor to localize both a mobile sensor and the peg, while the mobile sensor localizes the hole. Our strategy is extremely fast, localizing pegs while they are en route to their insertion location without pausing. The result is that insertion times are dominated by the transport time between pick and place operations.
ACCELERATORS: Beam based alignment of the SSRF storage ring
NASA Astrophysics Data System (ADS)
Zhang, Man-Zhou; Li, Hao-Hu; Jiang, Bo-Cheng; Liu, Gui-Min; Li, De-Ming
2009-04-01
There are 140 beam position monitors (BPMs) in the Shanghai Synchrotron Radiation Facility (SSRF) storage ring used for measuring the closed orbit. As the BPM pickup electrodes are assembled directly on the vacuum chamber, it is important to calibrate the offset of each BPM's electrical center relative to the magnetic center of an adjacent quadrupole. A beam-based alignment (BBA) method, which varies an individual quadrupole magnet's strength and observes the effect on the orbit, is used to measure the BPM offsets in both the horizontal and vertical planes. It is a completely automated technique with various data processing methods. Several parameters, such as the strength changes of the correctors and the quadrupoles, must be chosen carefully in a real measurement. After several rounds of BBA measurement and closed-orbit correction, these offsets are determined to an accuracy better than 10 μm. In this paper we present the method of beam-based calibration of BPMs, the experimental results for the SSRF storage ring, and the error analysis.
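The measurement principle, varying a quadrupole's strength and observing the orbit, reduces to finding where the orbit response vanishes as the beam is steered across the quadrupole. A sketch of that fitting step only (not the SSRF software; a linear response model and the function name are assumptions):

```python
import numpy as np

def bpm_offset(bpm_readings, orbit_responses):
    """Estimate a BPM offset from a beam-based alignment scan.

    bpm_readings: beam position at the BPM for each steering setting (mm).
    orbit_responses: signed closed-orbit change seen elsewhere in the ring
        when the adjacent quadrupole strength is toggled by a fixed dk.

    The response is ~linear in the beam offset inside the quadrupole; its
    zero crossing marks the quad magnetic centre, so the BPM reading there
    is the BPM offset. Illustrative sketch under these assumptions.
    """
    a, b = np.polyfit(bpm_readings, orbit_responses, 1)  # response = a*x + b
    return -b / a   # BPM reading where the orbit response vanishes
```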
NASA Astrophysics Data System (ADS)
Kuester, M. A.
2015-12-01
Remote sensing is a powerful tool for monitoring changes on the surface of the Earth at a local or global scale. The use of data sets from different sensors across many platforms, or even a single sensor over time, can bring a wealth of information when exploring anthropogenic changes to the environment. For example, variations in crop yield and health for a specific region can be detected by observing changes in the spectral signature of the particular species under study. However, changes in the atmosphere, sun illumination and viewing geometries during image capture can result in inconsistent image data, hindering automated information extraction. Additionally, an incorrect spectral radiometric calibration will lead to false or misleading results. It is therefore critical that the data being used are normalized and calibrated on a regular basis to ensure that physically derived variables are as close to truth as possible. Although most Earth-observing sensors are well calibrated in a laboratory prior to launch, a change in the radiometric response of the system is inevitable due to thermal, mechanical or electrical effects caused during the rigors of launch or by the space environment itself. Outgassing and exposure to ultraviolet radiation will also affect the sensor's filter responses. Pre-launch lamps and other laboratory calibration systems can also fall short in representing the actual output of the Sun. Differences in science variables derived using pre- and post-launch calibration will be presented for example cases (e.g. geology, agriculture) using DigitalGlobe's WorldView-3 superspectral sensor, with bands in the visible and near infrared as well as in the shortwave infrared. Important defects caused by an incomplete (i.e. pre-launch only) calibration will be discussed using validation data where available. In addition, the benefits of using a well-validated surface reflectance product will be presented. DigitalGlobe is committed to providing ongoing assessment of the radiometric performance of our sensors, which allows customers to get the most out of our extensive multi-sensor constellation.
NASA Astrophysics Data System (ADS)
Zechner, A.; Stock, M.; Kellner, D.; Ziegler, I.; Keuschnigg, P.; Huber, P.; Mayer, U.; Sedlmayer, F.; Deutschmann, H.; Steininger, P.
2016-11-01
Image guidance during highly conformal radiotherapy requires accurate geometric calibration of the moving components of the imager. Due to limited manufacturing accuracy and gravity-induced flex, an x-ray imager’s deviation from the nominal geometrical definition has to be corrected for. For this purpose a ball bearing phantom applicable for nine degrees of freedom (9-DOF) calibration of a novel cone-beam computed tomography (CBCT) scanner was designed and validated. In order to ensure accurate automated marker detection, as many uniformly distributed markers as possible should be used with a minimum projected inter-marker distance of 10 mm. Three different marker distributions on the phantom cylinder surface were simulated. First, a fixed number of markers are selected and their coordinates are randomly generated. Second, the quasi-random method is represented by setting a constraint on the marker distances in the projections. The third approach generates the ball coordinates helically based on the Golden ratio, ϕ. Projection images of the phantom incorporating the CBCT scanner’s geometry were simulated and analysed with respect to uniform distribution and intra-marker distance. Based on the evaluations a phantom prototype was manufactured and validated by a series of flexmap calibration measurements and analyses. The simulation with randomly distributed markers as well as the quasi-random approach showed an insufficient uniformity of the distribution over the detector area. The best compromise between uniform distribution and a high packing fraction of balls is provided by the Golden section approach. A prototype was manufactured accordingly. The phantom was validated for 9-DOF geometric calibrations of the CBCT scanner with independently moveable source and detector arms. A novel flexmap calibration phantom intended for 9-DOF was developed. The ball bearing distribution based on the Golden section was found to be highly advantageous. The phantom showed satisfying results for calibrations of the CBCT scanner and provides the basis for further flexmap correction and reconstruction developments.
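The Golden-section placement singled out above is compact to express. A sketch of helical marker generation by the golden angle (the marker count and cylinder dimensions are placeholders, not the phantom's specification):

```python
import numpy as np

def golden_helix_markers(n, radius, height):
    """Place n ball bearings helically on a cylinder using the golden angle.

    Successive markers are rotated by 2*pi*(1 - 1/phi) ~ 137.5 degrees,
    which fills the azimuth quasi-uniformly and avoids the periodic
    overlaps in projections that regular spacings can produce.
    """
    phi = (1 + np.sqrt(5)) / 2
    golden_angle = 2 * np.pi * (1 - 1 / phi)
    i = np.arange(n)
    theta = (i * golden_angle) % (2 * np.pi)
    z = (i + 0.5) / n * height
    return np.column_stack([radius * np.cos(theta), radius * np.sin(theta), z])

markers = golden_helix_markers(n=60, radius=75.0, height=200.0)  # mm, illustrative
```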
Development of an Automated Reader for Analysis and Storage of Personnel Dosimeter Badge Data
NASA Technical Reports Server (NTRS)
Meneghelli, B. J.; Hodge, T. R.; Robinson, L. J.; Lueck, D. E.
1997-01-01
The collection and archiving of data from personnel dosimeters has become increasingly important in light of the lowered threshold limit values (TLV) for hydrazine (HZ), monomethylhydrazine (MMH), and unsymmetrical dimethylhydrazine (UDMH). The American Conference of Governmental Industrial Hygienists (ACGIH) lowered the TLV from 100 parts per billion (ppb) to 10 ppb, which has caused increased concern over long-term exposures of personnel to trace levels of these hypergols and other potentially harmful chemicals. An automated system for reading the exposure levels of personnel dosimeters and storing exposure data for subsequent evaluation has been developed. The reading of personnel dosimeter badges for exposure to potentially harmful vapor concentrations of hydrazines or other chemicals is performed visually by comparing the color developed by the badge with a calibrated color comparator. The result obtained using visual comparisons of the developed badge color with the comparator may vary widely from user to user. The automated badge reader removes this variability by accurately comparing the reflectance obtained from a colored spot on the badge with a reading of the same spot prior to any exposure to chemical vapors. The observed difference between the reflectance values is used as part of a calculation of the dose value for the badge based on a stored calibration curve. The badge reader also stores bar-code data unique to each badge, as well as bar-code information on the user, as part of the permanent badge record. The start and stop exposure times for each badge are recorded and can be used as part of the calculated concentration, in ppm, for each badge logged during a recording period. The badge reader is equipped with a number of badge holders, each of which is unique to a specific type of personnel dosimeter badge. This gives the reader maximum flexibility to allow for the reading of several different types of badges. Test results of the badge reader for several different types of personnel dosimeter badges are presented within the body of this paper.
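The dose computation described above, a reflectance difference mapped through a stored calibration curve and divided by exposure time, can be sketched as follows. The calibration table here is entirely hypothetical, standing in for the reader's stored per-badge-type curve:

```python
import numpy as np

# Hypothetical stored calibration: reflectance drop (%) -> dose (ppb*h).
# The real reader holds one such curve per badge type.
CAL_DROP = np.array([0.0, 2.0, 5.0, 10.0, 20.0, 40.0])
CAL_DOSE = np.array([0.0, 5.0, 15.0, 40.0, 110.0, 300.0])

def badge_concentration(r_before, r_after, hours):
    """Average vapor concentration (ppb) from pre/post reflectance readings."""
    drop = 100.0 * (r_before - r_after) / r_before   # relative reflectance loss
    dose = np.interp(drop, CAL_DROP, CAL_DOSE)       # ppb*h from stored curve
    return dose / hours                              # time-averaged concentration

print(badge_concentration(r_before=0.92, r_after=0.81, hours=8.0))
```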
Automated inundation monitoring using TerraSAR-X multitemporal imagery
NASA Astrophysics Data System (ADS)
Gebhardt, S.; Huth, J.; Wehrmann, T.; Schettler, I.; Künzer, C.; Schmidt, M.; Dech, S.
2009-04-01
The Mekong Delta in Vietnam offers natural resources for several million inhabitants. However, a strong population increase, changing climatic conditions and regulatory measures at the upper reaches of the Mekong lead to severe changes in the Delta. Extreme flood events occur more frequently, drinking water availability is increasingly limited, soils show signs of salinization or acidification, and species and complete habitats diminish. During the monsoon season the river regularly overflows its banks in the lower Mekong area, usually with beneficial effects. However, extreme flood events occur more frequently and cause extensive damage; on average once every 6 to 10 years, river flood levels exceed the critical beneficial level. X-band SAR data are well suited for deriving inundated surface areas. The TerraSAR-X sensor with its different scanning modes allows for the derivation of inundation masks with high spatial and temporal resolution. The paper presents an automated procedure for deriving inundated areas from TerraSAR-X ScanSAR and Stripmap image data. Within the framework of the German-Vietnamese WISDOM project, focussing on the Mekong Delta region in Vietnam, images have been acquired covering the flood season from June 2008 to November 2008. Based on these images, a time series of so-called watermasks showing inundated areas has been derived. The product is required as an intermediate to (i) calibrate 2d inundation model scenarios, (ii) estimate the extent of affected areas, and (iii) analyze the scope of prior crises. The image processing approach is based on the assumption that water surfaces forward-scatter the radar signal, resulting in low backscatter at the sensor. It uses multiple grey-level thresholds and image morphological operations. The approach performs well in terms of automation, accuracy, robustness, and processing time. The resulting watermasks show the seasonal flooding pattern, with inundation starting in July, peaking at the end of September, and receding until December 2008. The results are a valuable input for monitoring and understanding the seasonal regional flood patterns and for calibrating 2d inundation models, as well as for generating value-added products in combination with agricultural land use and socio-economic data for further separation of inundated and irrigated areas.
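A minimal sketch of the thresholding-plus-morphology logic described above, using scipy.ndimage; the -15 dB threshold and minimum patch size are illustrative placeholders, and the actual procedure uses multiple grey-level thresholds:

```python
import numpy as np
from scipy import ndimage

def watermask(backscatter_db, threshold_db=-15.0, min_patch=50):
    """Derive a water mask from a calibrated SAR backscatter image (dB).

    Water appears dark (forward scattering), so low backscatter is
    thresholded and the binary mask is cleaned morphologically.
    """
    mask = backscatter_db < threshold_db
    mask = ndimage.binary_opening(mask, iterations=2)   # drop isolated speckle
    mask = ndimage.binary_closing(mask, iterations=2)   # fill small holes
    labels, n = ndimage.label(mask)                     # connected components
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep_ids = np.nonzero(sizes >= min_patch)[0] + 1    # drop tiny patches
    return np.isin(labels, keep_ids)
```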
O'Sullivan, Jeanette E; Watson, Roslyn J; Butler, Edward C V
2013-10-15
An automated procedure including both in-line preconcentration and multi-element determination by an inductively coupled plasma mass spectrometer (ICP-MS) has been developed for the determination of Cd, Co, Cu, Ni, Pb and Zn in open-ocean samples. The method relies on flow injection of the sample through a minicolumn of chelating (iminodiacetate) sorbent to preconcentrate the trace metals, while simultaneously eliminating the major cations and anions of seawater. The effectiveness of this step is tested, and reliability of results is secured, with a rigorous process of quality assurance comprising 36 calibration and reference samples in a run for analysis of 24 oceanic seawaters in a 6-h program. The in-line configuration and procedures presented minimise analyst operations and exposure to contamination. Seawater samples are used for calibration, providing a true matrix match. Continuous automated pH measurement registers that chelation occurs within a selected narrow pH range and monitors the consistency of the entire analytical sequence. The eluent (0.8 M HNO3) is sufficiently strong to elute the six metals in 39 s at a flow rate of 2.0 mL/min, while being compatible for prolonged use with the mass spectrometer. Throughput is one sample of 7 mL every 6 min. Detection limits were Co 3.2 pM, Ni 23 pM, Cu 46 pM, Zn 71 pM, Cd 2.7 pM and Pb 1.5 pM, with coefficients of variation ranging from 3.4% to 8.6% (n=14) and linearity of calibration established beyond the observed concentration range of each trace metal in ocean waters. Recoveries were Co 96.7%, Ni 102%, Cu 102%, Zn 98.1%, Cd 92.2% and Pb 97.6%. The method has been used to analyse ~800 samples from three voyages in the Southern Ocean and Tasman Sea. It has the potential to be extended to other trace elements in ocean waters. © 2013 Elsevier B.V. All rights reserved.
The JPSS Ground Project Algorithm Verification, Test and Evaluation System
NASA Astrophysics Data System (ADS)
Vicente, G. A.; Jain, P.; Chander, G.; Nguyen, V. T.; Dixon, V.
2016-12-01
The Government Resource for Algorithm Verification, Independent Test, and Evaluation (GRAVITE) is an operational system that provides services to the Suomi National Polar-orbiting Partnership (S-NPP) Mission. It is also a unique environment for Calibration/Validation (Cal/Val) and Data Quality Assessment (DQA) of the Joint Polar Satellite System (JPSS) mission data products. GRAVITE provides fast and direct access to the data and products created by the Interface Data Processing Segment (IDPS), the NASA/NOAA operational system that converts Raw Data Records (RDRs) generated by sensors on the S-NPP into calibrated, geolocated Sensor Data Records (SDRs) and generates Mission Unique Products (MUPs). It also facilitates algorithm investigation, integration, checkout and tuning, instrument and product calibration and data quality support, monitoring, and data/product distribution. GRAVITE is the portal for the latest S-NPP and JPSS baselined Processing Coefficient Tables (PCTs) and Look-Up Tables (LUTs) and hosts a number of DQA offline tools that take advantage of the proximity to the near-real-time data flows. It also contains a set of automated and ad-hoc Cal/Val tools used for algorithm analysis and updates, including an instance of the IDPS called the GRAVITE Algorithm Development Area (G-ADA), which has the latest installation of the IDPS algorithms running on identical software and hardware platforms. Two other important GRAVITE components are the Investigator-led Processing System (IPS) and the Investigator Computing Facility (ICF). The IPS is a dedicated environment where authorized users run automated scripts called Product Generation Executables (PGEs) to support Cal/Val and data quality assurance offline. This data-rich and data-driven service holds its own distribution system and allows operators to retrieve science data products. The ICF is a workspace where users can share computing applications and resources and have full access to libraries and science and sensor quality analysis tools. In this presentation we describe the GRAVITE systems and subsystems, architecture, technical specifications, capabilities and resources, distributed data and products, and the latest advances to support JPSS science algorithm implementation, validation and testing.
The ASTRA Spectrophotometer: A Progress Report
NASA Astrophysics Data System (ADS)
Adelman, S. J.; Gulliver, A. F.; Smalley, B.; Pazder, J. S.; Younger, P. F.; Boyd, L.; Epand, D.
2003-12-01
A spectrophotometer with a CCD detector and its automated 0.5-m telescope at the Fairborn Observatory, Washington Camp, AZ, are currently under construction. They were designed for efficient operations. By the end of 2004, scientific observations should be in progress. The Citadel ASTRA (Automated Spectrophotometric Telescope Research Associates) Telescope will be able to observe Vega, the primary standard, make rapid measurements of the naked-eye stars, use 10 min./hour to obtain photometric measurements of the nightly extinction, and obtain high-quality observations of V = 10.5 mag. stars in an hour. This cross-dispersed instrument will have an approximate wavelength range of λλ3300-9000 with a resolution of 14 Å in first order and 7 Å in second order, except for regions badly affected by telluric lines. At the end of the photometric calibration process, filter photometric magnitudes and indices will be calibrated; some will serve as quality checks. During the first year of observing, a grid of secondary standards will be calibrated differentially with respect to Vega. These stars will also be used to find the nightly extinction. The candidates for this process have been selected from the most stable of the bright secondary stars of the grating-scanner era, supplemented by the least variable main-sequence B0-F0 band stars in Hipparcos photometry and some metal-poor stars. Over the lifetime of the instrument, measurements of secondary stars will be used to improve the quality of the secondary standard fluxes. Science observations for major projects, such as comparisons with model atmosphere codes, and for exploratory investigations should also begin in the first year. The ASTRA team, in planning to deal with this potential data flood, realizes that it will need help to make the best scientific use of the data, and is therefore interested in discussing possible collaborations. In less than a year of normal observing, all isolated stars in the Bright Star Catalog that can be observed can have their fluxes well measured. ASTRA Contribution 2. This work is supported by NSF grant AST-0115612 to The Citadel.
Automated reconstruction of rainfall events responsible for shallow landslides
NASA Astrophysics Data System (ADS)
Vessia, G.; Parise, M.; Brunetti, M. T.; Peruccacci, S.; Rossi, M.; Vennari, C.; Guzzetti, F.
2014-04-01
Over the last 40 years, many contributions have been devoted to identifying empirical rainfall thresholds (e.g. intensity vs. duration, ID; cumulated rainfall vs. duration, ED; cumulated rainfall vs. intensity, EI) for the initiation of shallow landslides, based on local as well as worldwide inventories. Although different methods to trace the threshold curves have been proposed and discussed in the literature, a systematic study to develop an automated procedure for selecting the rainfall event responsible for the landslide occurrence has rarely been addressed. Nonetheless, objective criteria for estimating the rainfall responsible for the landslide occurrence (effective rainfall) play a prominent role in the threshold values. In this paper, two criteria for the identification of effective rainfall events are presented: (1) the first is based on the analysis of the time series of rainfall mean intensity values over one month preceding the landslide occurrence, and (2) the second on the analysis of the trend in time of the cumulated mean intensity series calculated from the rainfall records measured by rain gauges. The two criteria have been implemented in an automated procedure written in the R language. A sample of 100 shallow landslides collected in Italy by the CNR-IRPI research group from 2002 to 2012 has been used to calibrate the proposed procedure. The cumulated rainfall E and duration D of the rainfall events that triggered the documented landslides are calculated with the new procedure and fitted with a power law in the (D,E) diagram. The results are discussed by comparing the (D,E) pairs calculated by the automated procedure with those obtained by the expert method.
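Fitting the (D,E) pairs with a power law E = a·D^b is a linear least-squares problem in log-log space. A sketch (the data values are illustrative, not from the CNR-IRPI catalog):

```python
import numpy as np

def fit_ed_power_law(D, E):
    """Fit E = a * D^b to (duration, cumulated rainfall) pairs.

    Linear least squares on log10-transformed data, as is common for
    empirical rainfall thresholds; a sketch, not the authors' R procedure.
    """
    b, log_a = np.polyfit(np.log10(D), np.log10(E), 1)
    return 10 ** log_a, b

D = np.array([2.0, 6.0, 12.0, 24.0, 48.0, 96.0])    # duration, h (illustrative)
E = np.array([18.0, 30.0, 41.0, 55.0, 80.0, 110.0])  # cumulated rainfall, mm
a, b = fit_ed_power_law(D, E)
print(f"E = {a:.1f} * D^{b:.2f}")
```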
Initial Results from the Bloomsburg University Goniometer Laboratory
NASA Technical Reports Server (NTRS)
Shepard, M. K.
2002-01-01
The Bloomsburg University Goniometer Laboratory (B.U.G. Lab) consists of three systems for studying the photometric properties of samples. The primary system is an automated goniometer capable of measuring the entire bi-directional reflectance distribution function (BRDF) of samples. Secondary systems include a reflectance spectrometer and digital video camera with macro zoom lens for characterizing and documenting other physical properties of measured samples. Works completed or in progress include the characterization of the BRDF of calibration surfaces for the 2003 Mars Exploration Rovers (MER03), Martian analog soils including JSC-Mars-1, and tests of photometric models.
Cavity ring down spectrometry for disease diagnostics using exhaled air
NASA Astrophysics Data System (ADS)
Revalde, G.; Grundšteins, K.; Alnis, J.; Skudra, A.
2017-12-01
In this paper we report the current stage of development of a cavity ring-down spectrometer (CRDS) system using exhaled human breath analysis for the diagnostics of different diseases, such as diabetes and, later, lung cancer. The portable CRDS system operates in the ultraviolet spectral region, using pulsed 266 nm light from an Nd:YAG laser. Calibration of the CRDS system was performed using samples generated by a KinTek automated permeation tube system and self-prepared mixtures of known concentrations of benzene and acetone in air. First experiments showed that the limits of detection for benzene and acetone are several tens of ppb.
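For context, the concentration retrieval in CRDS rests on comparing ring-down times with and without the absorber; a standard relation (not given in the abstract) is:

```latex
% tau: ring-down time with sample; tau_0: empty cavity; c: speed of light
\alpha(\lambda) = \frac{1}{c}\left(\frac{1}{\tau(\lambda)} - \frac{1}{\tau_0(\lambda)}\right),
\qquad
n = \frac{\alpha(\lambda)}{\sigma(\lambda)}
```

where α is the absorption coefficient, σ the absorption cross-section of the analyte (here benzene or acetone at 266 nm), and n its number density, which maps to a mixing ratio in ppb.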
NASA Technical Reports Server (NTRS)
Cooke, W. J.; Brown, P. G.; Stober, G.; Schult, C.; Krzeminski, Z.; Chau, J. L.
2017-01-01
We describe a two-year campaign of simultaneous automated optical meteor and head echo radar measurements conducted with the Middle Atmosphere Alomar Radar System (MAARSY). This campaign was established with the following goals: compare trajectories as measured by MAARSY and the two optical stations for a range of meteoroid masses; compare photometric and dynamic masses measured optically with radar-derived masses (inter-calibration of mass scales); and use the best observed simultaneous events to fuse all metric, photometric and ionization estimates together and apply different ablation models to self-consistently model these highest-quality events.
NASA Astrophysics Data System (ADS)
Matula, Svatopluk; Dolezal, Frantisek; Moreira Barradas, Joao Manuel
2015-04-01
Electromagnetic soil water content sensors are invaluable tools because of their selective sensitivity to water, versatility, ease of automation and high resolution. A common drawback of most of their types is their preferential sensitivity to water near their surfaces. The ways in which this drawback manifests itself were explored for the case of the large Time-Domain Reflectometry (TDR) sensors Aqua-Tel-TDR (Automata, Inc., now McCrometer CONNECT). Their field performance was investigated and compared with the results of field and laboratory calibration. The field soil was a loamy Chernozem on a carbonate-rich loess substrate, while the laboratory calibration was done in fine quartz sand. In the field, the sensors were installed horizontally into pre-bored holes after being wrapped in a slurry of native soil or fine earth. Large sensor-to-sensor variability of readings was observed; it was partially removed by field calibration. The occurrence of percolation events could be easily recognised, because they made the TDR readings rise suddenly, sometimes considerably exceeding the saturated water content. After the events, the TDR readings fell, usually equally suddenly, remaining afterwards at levels somewhat higher than those before the event. These phenomena can be explained by the preferential flow of water in natural and artificial soil macropores around the sensors. It is hypothesised that the percolating water which enters the gaps and other voids around the sensors accumulates there for a short time, being hindered by the sensors themselves. This water also has an enhanced opportunity to be absorbed by the adjacent soil matrix. The variance of TDR readings obtained during the field calibration does not differ significantly from the variance of the corresponding gravimetric sampling data. This suggests that the slope of the field calibration equation is close to unity, in contrast to the laboratory calibration in quartz sand. This difference in slopes can be explained by the presence or absence, respectively, of gaps around the sensors. A typical percolation event and dry-period records are presented and analysed. Sensors of this type can be used for qualitative detection of preferential flow and perhaps also for its quantification. The readings outside the percolation events indicate that the sensor environment imitates the native soil reasonably well and that the field-calibrated sensors can provide quantitative information about the actual soil water content.
Litzenberg, Dale W; Gallagher, Ian; Masi, Kathryn J; Lee, Choonik; Prisciandaro, Joann I; Hamstra, Daniel A; Ritter, Timothy; Lam, Kwok L
2013-08-01
To present and characterize a measurement technique to quantify the calibration accuracy of an electromagnetic tracking system relative to radiation isocenter. This technique was developed as a quality assurance method for electromagnetic tracking systems used in a multi-institutional clinical hypofractionated prostate study. In this technique, the electromagnetic tracking system is calibrated to isocenter with the manufacturer's recommended technique, using laser-based alignment. A test patient is created with a transponder at isocenter whose position is measured electromagnetically. Four portal images of the transponder are taken with collimator rotations of 45°, 135°, 225°, and 315° at each of four gantry angles (0°, 90°, 180°, 270°) using a 3×6 cm2 radiation field. In each image, the center of the copper-wrapped iron core of the transponder is determined. All measurements are made relative to this transponder position to remove gantry and imager sag effects. For each of the 16 images, the 50% collimation edges are identified and used to find a ray representing the rotational axis of each collimation edge. The 16 collimator rotation rays from four gantry angles pass through and bound the radiation isocenter volume. The center of the bounded region, relative to the transponder, is calculated and then transformed to tracking-system coordinates using the transponder position, allowing the tracking system's calibration offset from radiation isocenter to be found. All image analysis and calculations are automated with in-house software for user-independent accuracy. Three different tracking systems at two different sites were evaluated for this study. The magnitude of the calibration offset was always less than the manufacturer's stated accuracy of 0.2 cm using their standard clinical calibration procedure, and ranged from 0.014 to 0.175 cm. On three systems in clinical use, the magnitude of the offset was found to be 0.053±0.036, 0.121±0.023, and 0.093±0.013 cm. The method presented here provides an independent technique to verify the calibration of an electromagnetic tracking system to radiation isocenter. The calibration accuracy of the system was better than the 0.2 cm accuracy stated by the manufacturer. However, it should not be assumed to be zero, especially for stereotactic radiation therapy treatments where planning target volume margins are very small.
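The geometric core of the method, finding the point that best fits the 16 bounding rays, is a small linear least-squares problem. A sketch of that step only (not the authors' in-house software):

```python
import numpy as np

def nearest_point_to_rays(origins, directions):
    """Least-squares point minimizing summed squared distance to 3D rays.

    Each ray is (o, d); the optimum p solves
        sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i,
    where (I - d d^T) projects onto the plane orthogonal to the ray.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)           # unit direction
        P = np.eye(3) - np.outer(d, d)      # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)
```

The resulting point, expressed relative to the imaged transponder, is what gets transformed into tracking-system coordinates to obtain the calibration offset.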
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisk, Mark D.; Pasyanos, Michael E.
Characterizing regional seismic signals continues to be a difficult problem due to their variability. Calibration of these signals is very important to many aspects of monitoring underground nuclear explosions, including detecting seismic signals, discriminating explosions from earthquakes, and reliably estimating magnitude and yield. Amplitude tomography, which simultaneously inverts for source, propagation, and site effects, is a leading method of calibrating these signals. A major issue in amplitude tomography is the data quality of the input amplitude measurements. Pre-event and pre-phase signal-to-noise ratio (SNR) tests are typically used but can frequently include bad signals and exclude good signals. The deficiencies of SNR criteria, which are demonstrated here, lead to large calibration errors. To ameliorate these issues, we introduce a semi-automated approach to assess the bandwidth of a spectrum where it behaves physically. We determine the maximum frequency (denoted as Fmax) where it deviates from this behavior due to inflections at which noise or spurious signals start to bias the spectra away from the expected decay. We compare two amplitude tomography runs using the SNR and new Fmax criteria and show significant improvements to the stability and accuracy of the tomography output for frequency bands higher than 2 Hz by using our assessments of valid S-wave bandwidth. We compare Q estimates, P/S residuals, and some detailed results to explain the improvements. Lastly, for frequency bands higher than 4 Hz, needed for effective P/S discrimination of explosions from earthquakes, the new bandwidth criteria sufficiently fix the instabilities and errors so that the residuals and calibration terms are useful for application.
Masalski, Marcin; Kipiński, Lech; Grysiński, Tomasz; Kręcicki, Tomasz
2016-05-30
Hearing tests carried out in a home setting by means of mobile devices require previous calibration of the reference sound level. Mobile devices with bundled headphones create the possibility of applying a predefined level for a particular model as an alternative to calibrating each device separately. The objective of this study was to determine the reference sound level for sets composed of a mobile device and bundled headphones. Reference sound levels for Android-based mobile devices were determined using an open-access mobile phone app by means of biological calibration, that is, in relation to the normal-hearing threshold. The examinations were conducted in two groups: an uncontrolled and a controlled one. In the uncontrolled group, fully automated self-measurements were carried out in home conditions by 18- to 35-year-old subjects without prior hearing problems, recruited online. Calibration was conducted as a preliminary step in preparation for further examination. In the controlled group, audiologist-assisted examinations were performed in a sound booth on normal-hearing subjects verified through pure-tone audiometry, recruited offline from among the workers and patients of the clinic. In both groups, the reference sound levels were determined on a subject's mobile device using Bekesy audiometry. The reference sound levels were compared between the groups, and intramodel and intermodel analyses were carried out as well. In the uncontrolled group, 8988 calibrations were conducted on 8620 different devices representing 2040 models. In the controlled group, 158 calibrations (test and retest) were conducted on 79 devices representing 50 models. Result analysis was performed for the 10 most frequently used models in both groups. The difference in reference sound levels between the uncontrolled and controlled groups was 1.50 dB (SD 4.42). The mean SD of the reference sound level determined for devices within the same model was 4.03 dB (95% CI 3.93-4.11). Statistically significant differences were found across models. Reference sound levels determined in the uncontrolled group are comparable to the values obtained in the controlled group. This validates the use of biological calibration in the uncontrolled group for determining the predefined reference sound level for new devices. Moreover, due to the relatively small deviation of the reference sound level for devices of the same model, it is feasible to conduct hearing screening on devices calibrated with the predefined reference sound level.
Calibration and validation of rockfall models
NASA Astrophysics Data System (ADS)
Frattini, Paolo; Valagussa, Andrea; Zenoni, Stefania; Crosta, Giovanni B.
2013-04-01
Calibrating and validating landslide models is extremely difficult due to the particular characteristics of landslides: limited recurrence in time, relatively low frequency of the events, short durability of post-event traces, and poor availability of continuous monitoring data, especially for small landslides and rockfalls. For this reason, most of the rockfall models presented in the literature completely lack calibration and validation of the results. In this contribution, we explore different strategies for rockfall model calibration and validation, starting from both an historical event and a full-scale field test. The event occurred in 2012 in Courmayeur (Western Alps, Italy) and caused serious damage to quarrying facilities. This event was studied soon after its occurrence through a field campaign aimed at mapping the blocks arrested along the slope, the shape and location of the detachment area, and the traces of scars associated with impacts of blocks on the slope. The full-scale field test was performed by Geovert Ltd in the Christchurch area (New Zealand) after the 2011 earthquake. During the test, a number of large blocks were mobilized from the upper part of the slope and filmed with high-speed cameras from different viewpoints. The movies of each released block were analysed to identify the block shape, the propagation path, the location of impacts, the height of the trajectory and the velocity of the block along the path. Both calibration and validation of rockfall models should be based on optimizing the agreement between the actual trajectories or locations of arrested blocks and the simulated ones. A measure that describes this agreement is therefore needed. For calibration purposes, this measure should be simple enough to allow trial-and-error repetitions of the model for parameter optimization. In this contribution we explore four calibration/validation measures: (1) the percentage of simulated blocks arresting within a buffer of the actual blocks, (2) the percentage of trajectories passing through the buffer of the actual rockfall path, (3) the mean distance between the arrest location of each simulated block and the location of the nearest actual block, and (4) the mean distance between the detachment location of each simulated block and the detachment location of the actual block located closest to the arrest position. By applying the four measures to the case studies, we observed that all measures are able to represent the model performance for validation purposes. However, the third measure is simpler and more reliable than the others, and seems to be optimal for model calibration, especially when using parameter estimation and optimization software for automated calibration.
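Measure (3) above is straightforward to compute with a spatial index. A sketch assuming planimetric (x, y) arrest coordinates (the function name is an assumption):

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_nearest_arrest_distance(simulated_xy, actual_xy):
    """Measure (3): mean distance from each simulated block's arrest point
    to the nearest actually mapped block. Lower is better, which makes it
    directly usable as the objective in automated parameter optimization."""
    tree = cKDTree(np.asarray(actual_xy, float))
    distances, _ = tree.query(np.asarray(simulated_xy, float))
    return distances.mean()
```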
WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves
NASA Astrophysics Data System (ADS)
Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise
2017-10-01
Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances in both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Even if simple in theory, many details are difficult for a practitioner to master, so that the implementation of a sea-wave 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an open-source stereo processing pipeline for sea-wave 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig so that no delicate calibration has to be performed in the field. It implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library and, lastly, it includes a set of filtering techniques, both on the disparity map and on the produced point cloud, to remove the vast majority of erroneous points that can naturally arise while analyzing the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step by step and demonstrated on real datasets acquired at sea.
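A minimal sketch of the OpenCV dense-stereo step that a pipeline like WASS automates; the filenames, intrinsics, baseline and Q matrix below are placeholders, and rectification, extrinsic estimation and the WASS filtering stages are omitted:

```python
import cv2
import numpy as np

# Placeholder rectified image pair; WASS estimates extrinsics itself.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM is x16

# Illustrative disparity-to-depth matrix (normally from cv2.stereoRectify)
f, cx, cy, Tx = 1200.0, 640.0, 512.0, 0.25   # focal (px), principal pt, baseline (m)
Q = np.float32([[1, 0, 0, -cx],
                [0, 1, 0, -cy],
                [0, 0, 0,  f],
                [0, 0, -1.0 / Tx, 0]])

points_3d = cv2.reprojectImageTo3D(disparity, Q)   # H x W x 3
cloud = points_3d[disparity > 0]                   # keep valid matches only
```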
NASA Technical Reports Server (NTRS)
Sekula, Martin K.
2012-01-01
Projection moiré interferometry (PMI) was employed to measure blade deflections during a hover test of a generic model-scale rotor in the NASA Langley 14x22 subsonic wind tunnel's hover facility. PMI was one of several optical measurement techniques tasked to acquire deflection and flow visualization data for a rotor at several distinct heights above a ground plane. Two of the main objectives of this test were to demonstrate that multiple optical measurement techniques can be used simultaneously to acquire data and to identify and address deficiencies in the techniques. Several PMI-specific technical challenges needed to be addressed during the test and in post-processing of the data. These challenges included developing an efficient and accurate calibration method for an extremely large (65 inch) height range; automating the analysis of the large amount of data acquired during the test; and developing a method to determine the absolute displacement of rotor blades without a required anchor-point measurement. The results indicate that the use of a single-camera/single-projector approach for the large height range reduced the accuracy of the PMI system compared to PMI systems designed for smaller height ranges. The lack of the anchor-point measurement (due to a technical issue with one of the other measurement techniques) limited the ability of the PMI system to correctly measure blade displacements to only one of the three rotor heights tested. The new calibration technique reduced the data required by 80 percent, while new post-processing algorithms successfully automated the process of locating rotor blades in images, determining the blade quarter-chord location, and calculating the blade root and blade tip heights above the ground plane.
NASA Astrophysics Data System (ADS)
Shirley, Matthew Richard
I analyzed seismic data from the Ozarks-Illinois-Indiana-Kentucky (OIINK) seismic experiment that operated in eastern Missouri, southern Illinois, southern Indiana, and Kentucky from July 2012 through March 2015. A product of this analysis is a new catalog of earthquake locations and magnitudes for small-magnitude local events during the study period. The analysis included a pilot study involving detailed manual analysis of all events in a ten-day test period and determination of the best parameters for a suite of automated detection and location programs. I eliminated events that were not earthquakes (mostly quarry and surface-mine blasts) from the output of the automated programs, and reprocessed the locations for the earthquakes with manually picked P- and S-wave arrivals. The catalog consists of earthquake locations, depths, and local magnitudes for 147 events, including 19 located within the bounds of the OIINK array. Of these events, 16 were newly reported, too small to appear in the Center for Earthquake Research and Information (CERI) regional seismic network catalog. I compared the magnitudes reported by CERI for corresponding earthquakes to establish a magnitude calibration factor for all earthquakes recorded by the OIINK array. With the calibrated earthquake magnitudes, I incorporated the previous OIINK results from Yang et al. (2014) to create magnitude-frequency distributions for the seismic zones in the region alongside the magnitude-frequency distributions made from CERI data. This shows that the Saint Genevieve and Wabash Valley seismic zones experience seismic activity at an order of magnitude lower rate than the New Madrid seismic zone, and the Rough Creek Graben experiences seismic activity two orders of magnitude less frequently than New Madrid.
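Two steps in this workflow, the magnitude calibration against CERI and the magnitude-frequency comparison, reduce to a few lines. A sketch (the function names are assumptions; the b-value estimator is the standard Aki maximum-likelihood form, ignoring magnitude-binning corrections):

```python
import numpy as np

def magnitude_calibration_offset(ml_local, ml_reference):
    """Mean magnitude difference for events common to both catalogs,
    applied as an additive calibration factor to the local magnitudes."""
    return np.mean(np.asarray(ml_reference) - np.asarray(ml_local))

def gutenberg_richter_b(mags, m_c):
    """Maximum-likelihood b-value (Aki, 1965) above completeness m_c."""
    m = np.asarray(mags, float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)
```

Comparing b-values and rates above a common completeness magnitude is what supports relative statements such as one seismic zone being an order of magnitude less active than another.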
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, C; Xing, L; Fahimian, B
Purpose: Accuracy of positioning, timing and activity is of critical importance for High Dose Rate (HDR) brachytherapy delivery. Respective measurements via film autoradiography, stop-watches and well chambers can be cumbersome, crude or lack dynamic source evaluation capabilities. To address such limitations, a single-device radioluminescent detection system enabling automated real-time quantification of activity, position and timing accuracy is presented and experimentally evaluated. Methods: A radioluminescent sheet was fabricated by mixing Gd2O2S:Tb with PDMS and incorporated into a 3D printed device where it was fixated below a CMOS digital camera. An Ir-192 HDR source (VS2000, VariSource iX) with an effective active length of 5 mm was introduced using a 17-gauge stainless steel needle below the sheet. Pixel intensity values for determining activity were taken from an ROI centered on the source location. A calibration curve relating intensity values to activity was generated and used to evaluate automated activity determination with data gathered over 6 weeks. Positioning measurements were performed by integrating images for an entire delivery and fitting peaks to the resulting profile. Timing measurements were performed by evaluating source location and timestamps from individual images. Results: Average predicted activity error over 6 weeks was 0.35 ± 0.5%. The distance between four dwell positions was determined by the automated system to be 1.99 ± 0.02 cm. The result from autoradiography was 2.00 ± 0.03 cm. The system achieved a time resolution of 10 msec and determined the dwell time to be 1.01 ± 0.02 sec. Conclusion: The system was able to successfully perform automated detection of activity, positioning and timing concurrently under a single setup. Relative to radiochromic and radiographic film-based autoradiography, which can only provide a static evaluation of positioning, optical detection of temporary radiation-induced luminescence enables dynamic detection of position and automated quantification of timing with millisecond accuracy.
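A minimal sketch of two of the automated readouts described above, assuming a linear intensity-to-activity calibration and a fixed 10 ms frame interval; all numbers are hypothetical stand-ins, not the VariSource measurements.

```python
import numpy as np

# Hypothetical weekly calibration points: mean ROI pixel intensity vs. known
# (decay-corrected) source activity; a linear relation is assumed.
activity_ci = np.array([10.0, 8.5, 7.2, 6.1, 5.2, 4.4])
roi_intensity = np.array([2050., 1742., 1473., 1255., 1066., 902.])
slope, intercept = np.polyfit(roi_intensity, activity_ci, 1)

def predict_activity(intensity):
    """Automated activity readout from the current ROI intensity."""
    return slope * intensity + intercept

# Dwell timing from frame timestamps: count frames in which the fitted source
# position sits at a given dwell location (10 ms frames, as in the paper).
frame_dt = 0.010
at_dwell = np.zeros(150, dtype=bool)
at_dwell[20:121] = True                       # source detected in frames 20..120
print(f"{predict_activity(1500.0):.2f} Ci, dwell {at_dwell.sum() * frame_dt:.2f} s")
```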
Lapsley Miller, Judi A; Reed, Charlotte M; Robinson, Sarah R; Perez, Zachary D
2018-02-21
Clinical pure-tone audiometry is conducted using stimuli delivered through supra-aural headphones or insert earphones. The stimuli are calibrated in an acoustic (average ear) coupler. Deviations in individual-ear acoustics from the coupler acoustics affect test validity, and variations in probe insertion and headphone placement affect both test validity and test-retest reliability. Using an insert earphone designed for otoacoustic emission testing, which contains a microphone and loudspeaker, an individualized in-the-ear calibration can be calculated from the ear-canal sound pressure measured at the microphone. However, the total sound pressure level (SPL) measured at the microphone may be affected by standing-wave nulls at higher frequencies, producing errors in stimulus level of up to 20 dB. An alternative is to calibrate using the forward pressure level (FPL) component, which is derived from the total SPL using a wideband acoustic immittance measurement, and represents the pressure wave incident on the eardrum. The objective of this study is to establish test-retest reliability for FPL calibration of pure-tone audiometry stimuli, compared with in-the-ear and coupler sound pressure calibrations. The authors compared standard audiometry using a modern clinical audiometer with TDH-39P supra-aural headphones calibrated in a coupler to a prototype audiometer with an ER10C earphone calibrated three ways: (1) in-the-ear using the total SPL at the microphone, (2) in-the-ear using the FPL at the microphone, and (3) in a coupler (all three are derived from the same measurement). The test procedure was similar to that commonly used in hearing-conservation programs, using pulsed-tone test frequencies at 0.5, 1, 2, 3, 4, 6, and 8 kHz, and an automated modified Hughson-Westlake audiometric procedure. Fifteen adult human participants with normal to mildly-impaired hearing were selected, and one ear from each was tested. Participants completed 10 audiograms on each system, with test order randomly varied and with headphones and earphones refitted by the tester between tests. Fourteen of 15 ears had standing-wave nulls present between 4 and 8 kHz. The mean intrasubject SD at 6 and 8 kHz was lowest for the FPL calibration, and was comparable with the low-frequency reliability across calibration methods. This decrease in variability translates to statistically-derived significant threshold shift criteria indicating that 15 dB shifts in hearing can be reliably detected at 6 and 8 kHz using FPL-calibrated ER10C earphones, compared with 20 to 25 dB shifts using standard TDH-39P headphones with a coupler calibration. These results indicate that reliability is better with insert earphones, especially with in-the-ear FPL calibration, compared with a standard clinical audiometer with supra-aural headphones. However, in-the-ear SPL calibration should not be used due to its sensitivity to standing waves. The improvement in reliability is clinically meaningful, potentially allowing hearing-conservation programs to more confidently determine significant threshold shifts at 6 kHz, a key frequency for the early detection of noise-induced hearing loss.
NASA Astrophysics Data System (ADS)
Orlando, José Ignacio; Fracchia, Marcos; del Río, Valeria; del Fresno, Mariana
2017-11-01
Several ophthalmological and systemic diseases are manifested through pathological changes in the properties and the distribution of the retinal blood vessels. The characterization of such alterations requires the segmentation of the vasculature, a tedious and time-consuming task that is infeasible to perform manually. Numerous attempts have been made to propose automated methods for segmenting the retinal vasculature from fundus photographs, although their application in real clinical scenarios is usually limited by their ability to deal with images taken at different resolutions. This is likely due to the large number of parameters that have to be properly calibrated according to each image scale. In this paper we propose to apply a novel strategy for automated feature parameter estimation, combined with a vessel segmentation method based on fully connected conditional random fields. The estimation model is learned by linear regression from structural properties of the images and known optimal configurations that were previously obtained for low-resolution data sets. Our experiments in high-resolution images show that this approach is able to estimate appropriate configurations that are suitable for performing the segmentation task without requiring parameters to be re-engineered. Furthermore, our combined approach reported state-of-the-art performance on the benchmark data set HRF, as measured in terms of the F1-score and the Matthews correlation coefficient.
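The parameter-estimation idea lends itself to a short sketch: a linear regression maps structural properties of an image to a filter configuration, trained on low-resolution sets with known optima. The property names, values and the two estimated parameters below are invented for illustration; this is not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented training data: structural properties of low-resolution sets (image
# width in pixels, mean vessel calibre in pixels) paired with configurations
# previously found optimal by grid search on those sets.
X = np.array([[565, 7.0], [605, 7.5], [584, 8.0], [700, 9.5]], dtype=float)
Y = np.array([[1.5, 8.0], [1.6, 8.6], [1.6, 9.1], [1.9, 10.8]], dtype=float)

reg = LinearRegression().fit(X, Y)

# Estimate a configuration for an unseen high-resolution (HRF-like) image
# instead of re-engineering the parameters by hand.
sigma, crf_weight = reg.predict(np.array([[3504.0, 40.0]]))[0]
print(f"estimated filter sigma={sigma:.2f}, CRF pairwise weight={crf_weight:.2f}")
```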
Nasso, Sara; Goetze, Sandra; Martens, Lennart
2015-09-04
Selected reaction monitoring (SRM) MS is a highly selective and sensitive technique to quantify protein abundances in complex biological samples. To enhance the pace of large SRM studies, a validated, robust method to fully automate absolute quantification and substitute for interactive evaluation would be valuable. To address this demand, we present Ariadne, a Matlab software package. To quantify monitored targets, Ariadne exploits metadata imported from the transition lists, and targets can be filtered according to mProphet output. Signal processing and statistical learning approaches are combined to compute peptide quantifications. To robustly estimate absolute abundances, the external calibration curve method is applied, ensuring linearity over the measured dynamic range. Ariadne was benchmarked against mProphet and Skyline by comparing its quantification performance on three different dilution series, featuring either noisy/smooth traces without background or smooth traces with complex background. Results, evaluated as efficiency, linearity, accuracy, and precision of quantification, showed that Ariadne's performance is independent of data smoothness and the presence of complex background, that Ariadne outperforms mProphet on the noisier data set, and that it improves Skyline's accuracy and precision 2-fold for the lowest-abundance dilution with complex background. Remarkably, Ariadne could statistically distinguish all the different abundances from each other, discriminating dilutions as low as 0.1 and 0.2 fmol. These results suggest that Ariadne offers reliable and automated analysis of large-scale SRM differential expression studies.
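The external calibration curve method the paper relies on can be sketched as follows, with a hypothetical dilution series and a simple linearity check; this illustrates the underlying idea, not Ariadne's implementation.

```python
import numpy as np

# Hypothetical dilution series for one peptide: spiked amount vs. SRM peak area.
amount_fmol = np.array([0.1, 0.2, 0.5, 1.0, 5.0, 10.0])
peak_area = np.array([208., 399., 1010., 2040., 10150., 19980.])

slope, intercept = np.polyfit(amount_fmol, peak_area, 1)
r2 = np.corrcoef(amount_fmol, peak_area)[0, 1] ** 2
assert r2 > 0.99, "calibration is not linear over the measured dynamic range"

def absolute_abundance(area):
    """Invert the external calibration curve for an unknown sample."""
    return (area - intercept) / slope

print(f"unknown ~ {absolute_abundance(5070.0):.2f} fmol (R^2 = {r2:.4f})")
```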
Analysis of Four Automated Urinalysis Systems Compared to Reference Methods.
Bartosova, Kamila; Kubicek, Zdenek; Franekova, Janka; Louzensky, Gustav; Lavrikova, Petra; Jabor, Antonin
2016-11-01
The aim of this study was to compare four automated urinalysis systems: the Iris iQ200 Sprint (Iris Diagnostics, U.S.A.) combined with the Arkray AUTION MAX AX 4030, Iris + AUTION, Arkray AU 4050 (Arkray Global Business, Inc., Japan), Dirui FUS 2000 (Dirui Industrial Co., P.R.C.), and Menarini sediMAX (Menarini, Italy). Urine concentrations of protein and glucose (Iris, Dirui) were compared using reference quantitative analysis on an Abbott Architect c16000. Leukocytes, erythrocytes, epithelia, and casts (Iris, Arkray, Dirui, Menarini) were compared to urine sediment under reference light microscopy, Leica DM2000 (Leica Microsystems GmbH, Germany) with calibrated FastRead plates (Biosigma S.r.l., Italy), using both native and stained preparations. Total protein and glucose levels were measured using the Iris + AUTION system with borderline trueness, while the Dirui analysis revealed worse performance for the protein and glucose measurements. True classifications of leukocytes and erythrocytes were above 85% and 72%, respectively. Kappa statistics revealed a nearly perfect evaluation of leukocytes for all tested systems; the erythrocyte evaluation was nearly perfect for the Iris, Dirui and Arkray analyzers and substantial for the Menarini analyzer. The epithelia identification was associated with high false negativity (above 15%) in the Iris, Arkray, and Menarini analyses. False-negative casts were above 70% for all tested systems. The use of automated urinalysis demonstrated some weaknesses and should be checked by experienced laboratory staff using light microscopy.
Theanponkrang, Somjai; Suginta, Wipa; Weingart, Helge; Winterhalter, Mathias; Schulte, Albert
2015-01-01
A new automated pharmacoanalytical technique for convenient quantification of redox-active antibiotics has been established by combining the benefits of a carbon nanotube (CNT) sensor modification with electrocatalytic activity for analyte detection with the merits of a robotic electrochemical device that is capable of sequential nonmanual sample measurements in 24-well microtiter plates. Norfloxacin (NFX) and ciprofloxacin (CFX), two standard fluoroquinolone antibiotics, were used in automated calibration measurements by differential pulse voltammetry (DPV), achieving linear ranges of 1–10 μM and 2–100 μM for NFX and CFX, respectively. The lowest detectable levels were estimated to be 0.3±0.1 μM (n=7) for NFX and 1.6±0.1 μM (n=7) for CFX. In standard solutions or tablet samples of known content, both analytes could be quantified with the robotic DPV microtiter plate assay, with recoveries within ±4% of 100%. Recoveries were equally good when NFX was evaluated in human serum samples spiked with NFX. The use of simple instrumentation, convenience in execution, and high effectiveness in analyte quantitation suggest the merger of automated microtiter plate voltammetry and CNT-supported electrochemical drug detection as a novel methodology for antibiotic testing in pharmaceutical and clinical research and quality control laboratories. PMID:25670899
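A hedged sketch of how such a calibration and detection limit might be computed from the robotic DPV data, assuming a linear response and the common 3-sigma blank criterion; all currents and concentrations are invented.

```python
import numpy as np

# Hypothetical NFX calibration: concentration (uM) vs. DPV peak current (uA),
# measured automatically well-by-well by the robotic system.
conc_um = np.array([1., 2., 4., 6., 8., 10.])
i_peak_ua = np.array([0.41, 0.84, 1.69, 2.55, 3.38, 4.22])
slope, intercept = np.polyfit(conc_um, i_peak_ua, 1)

# Lowest detectable level estimated with the 3-sigma criterion from replicate
# blank wells (n = 7, matching the paper's replication count).
blanks_ua = np.array([0.020, 0.024, 0.018, 0.022, 0.021, 0.025, 0.019])
lod_um = 3 * blanks_ua.std(ddof=1) / slope
print(f"sensitivity {slope:.3f} uA/uM, estimated LOD {lod_um:.2f} uM")
```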
A Ground Systems Template for Remote Sensing Systems
NASA Astrophysics Data System (ADS)
McClanahan, Timothy P.; Trombka, Jacob I.; Floyd, Samuel R.; Truskowski, Walter; Starr, Richard D.; Clark, Pamela E.; Evans, Larry G.
2002-10-01
Spaceborne remote sensing using gamma and X-ray spectrometers requires particular attention to the design and development of reliable systems. These systems must ensure the scientific requirements of the mission within the challenging technical constraints of operating instrumentation in space. The Near Earth Asteroid Rendezvous (NEAR) spacecraft included X-ray and gamma-ray spectrometers (XGRS), whose mission was to map the elemental chemistry of the asteroid 433 Eros. A remote sensing system template, similar to a blackboard systems approach used in artificial intelligence, was identified in which the spacecraft, instrument, and ground system were designed and developed to monitor and adapt to evolving mission requirements in a complicated operational setting. Systems were developed for ground tracking of instrument calibration, instrument health, data quality, orbital geometry, and solar flux, as well as models of the asteroid's surface characteristics, requiring an intensive human effort. In the future, missions such as the Autonomous Nano-Technology Swarm (ANTS) program will have to rely heavily on automation to collectively encounter and sample asteroids in the outer asteroid belt. Using similar instrumentation, ANTS will require information similar to data collected by the NEAR X-ray/Gamma-Ray Spectrometer (XGRS) ground system for science and operations management. The NEAR XGRS systems will be studied to identify the equivalent subsystems that may be automated for ANTS. The effort will also investigate the possibility of applying blackboard-style approaches to the automated decision making required for ANTS.
Yera, H.; Filisetti, D.; Bastien, P.; Ancelle, T.; Thulliez, P.; Delhaes, L.
2009-01-01
Over the past few years, a number of new nucleic acid extraction methods and extraction platforms using chemistry combined with magnetic or silica particles have been developed, in combination with instruments to facilitate the extraction procedure. The objective of the present study was to investigate the suitability of these automated methods for the isolation of Toxoplasma gondii DNA from amniotic fluid (AF). Therefore, three automated procedures were compared to two commercialized manual extraction methods. The MagNA Pure Compact (Roche), BioRobot EZ1 (Qiagen), and easyMAG (bioMérieux) automated procedures were compared to two manual DNA extraction kits, the QIAamp DNA minikit (Qiagen) and the High Pure PCR template preparation kit (Roche). Evaluation was carried out with two specific Toxoplasma PCRs (targeting the 529-bp repeat element), inhibitor search PCRs, and human beta-globin PCRs. The samples each consisted of 4 ml of AF with or without a calibrated Toxoplasma gondii RH strain suspension (0, 1, 2.5, 5, and 25 tachyzoites/ml). All PCR assays were laboratory-developed real-time PCR assays, using either TaqMan or fluorescent resonance energy transfer probes. A total of 1,178 PCRs were performed, including 978 Toxoplasma PCRs. The automated and manual methods were similar in sensitivity for DNA extraction from T. gondii at the highest concentration (25 Toxoplasma gondii cells/ml). However, our results showed that the DNA extraction procedures led to variable efficacy in isolating low concentrations of tachyzoites in AF samples (<5 Toxoplasma gondii cells/ml), a difference that might have repercussions since low parasite concentrations in AF exist and can lead to congenital toxoplasmosis. PMID:19846633
NASA Technical Reports Server (NTRS)
Pham, Timothy T.; Machuzak, Richard J.; Bedrossian, Alina; Kelly, Richard M.; Liao, Jason C.
2012-01-01
This software provides an automated capability to measure and qualify the frequency stability performance of the Deep Space Network (DSN) ground system, using daily spacecraft tracking data. The results help to verify if the DSN performance is meeting its specification, therefore ensuring commitments to flight missions; in particular, the radio science investigations. The rich set of data also helps the DSN Operations and Maintenance team to identify the trends and patterns, allowing them to identify the antennas of lower performance and implement corrective action in a timely manner. Unlike the traditional approach where the performance can only be obtained from special calibration sessions that are both time-consuming and require manual setup, the new method taps into the daily spacecraft tracking data. This new approach significantly increases the amount of data available for analysis, roughly by two orders of magnitude, making it possible to conduct trend analysis with good confidence. The software is built with automation in mind for end-to-end processing. From the inputs gathering to computation analysis and later data visualization of the results, all steps are done automatically, making the data production at near zero cost. This allows the limited engineering resource to focus on high-level assessment and to follow up with the exceptions/deviations. To make it possible to process the continual stream of daily incoming data without much effort, and to understand the results quickly, the processing needs to be automated and the data summarized at a high level. Special attention needs to be given to data gathering, input validation, handling anomalous conditions, computation, and presenting the results in a visual form that makes it easy to spot items of exception/deviation so that further analysis can be directed and corrective actions followed.
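Frequency stability of this kind is conventionally summarized with the Allan deviation; the sketch below computes the overlapping estimator under the assumption that a tracking pass has been reduced to evenly sampled fractional-frequency residuals (the series here is synthetic noise, not DSN data).

```python
import numpy as np

def overlapping_allan_deviation(y, m):
    """Overlapping Allan deviation of fractional-frequency samples y at an
    averaging factor m (averaging time m times the sample interval)."""
    y = np.asarray(y, dtype=float)
    if y.size < 2 * m + 1:
        raise ValueError("record too short for this averaging factor")
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # running block means
    d = ybar[m:] - ybar[:-m]                             # differences m apart
    return np.sqrt(0.5 * np.mean(d ** 2))

# Synthetic white-FM residuals standing in for one tracking pass (1 s samples).
rng = np.random.default_rng(1)
y = 1e-14 * rng.standard_normal(3600)
for m in (1, 10, 100):
    print(f"tau = {m:4d} s   sigma_y = {overlapping_allan_deviation(y, m):.2e}")
```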
EasyLCMS: an asynchronous web application for the automated quantification of LC-MS data
2012-01-01
Background Downstream applications in metabolomics, as well as mathematical modelling, require data in a quantitative format, which may also necessitate the automated and simultaneous quantification of numerous metabolites. Although numerous applications have been previously developed for metabolomics data handling, automated calibration and calculation of concentrations in terms of μmol have not been carried out. Moreover, most metabolomics applications are designed for GC-MS and are not suitable for LC-MS, since in LC the deviation in retention time is not linear, which these applications do not take into account. Furthermore, only a few are web-based applications, which can improve on stand-alone software in terms of compatibility, sharing capabilities and hardware requirements, although a high-bandwidth connection is required. In addition, none of these incorporate asynchronous communication to allow real-time interaction with pre-processed results. Findings Here, we present EasyLCMS (http://www.easylcms.es/), a new application for automated quantification, which was validated by comparing more than 1000 concentrations in real samples against manual operation. The results showed that only 1% of the quantifications presented a relative error higher than 15%. Using clustering analysis, the metabolites with the highest relative error distributions were identified and studied to resolve recurrent mistakes. Conclusions EasyLCMS is a new web application designed to quantify numerous metabolites simultaneously, integrating LC distortions and asynchronous web technology to present a visual interface with dynamic interaction which allows checking and correction of LC-MS raw data pre-processing results. Moreover, quantified data obtained with EasyLCMS are fully compatible with numerous downstream applications, as well as with mathematical modelling in the systems biology field. PMID:22884039
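The nonlinear retention-time handling that distinguishes LC from GC can be illustrated with a small sketch: a low-order polynomial fitted to the drift of anchor compounds, rather than a single offset. The anchors and coefficients are invented; this is not EasyLCMS code.

```python
import numpy as np

# Invented anchor compounds: expected vs. observed retention times (min).
# In LC the deviation drifts nonlinearly across the run, so a constant offset
# (or a linear stretch, as in many GC tools) is not enough.
rt_expected = np.array([1.2, 3.5, 6.0, 9.8, 14.2, 18.5])
rt_observed = np.array([1.36, 3.79, 6.22, 9.94, 14.18, 18.31])

drift = np.polyfit(rt_expected, rt_observed - rt_expected, 2)

def expected_position(rt):
    """Predicted observed retention time for a target's library value."""
    return rt + np.polyval(drift, rt)

print(f"search for the 8.0 min target near {expected_position(8.0):.2f} min")
```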
Headspace gas chromatographic method for the measurement of difluoroethane in blood.
Broussard, L A; Broussard, A; Pittman, T; Lafferty, D; Presley, L
2001-01-01
To develop a gas chromatographic assay for the analysis of difluoroethane, a volatile substance, in blood and to determine assay characteristics including linearity, limit of quantitation, precision, and specificity. The work was performed in a referral toxicology laboratory. Difluoroethane, a colorless, odorless, highly flammable gas used as a refrigerant blend component and aerosol propellant, may be abused via inhalation. A headspace gas chromatographic procedure for the identification and quantitation of difluoroethane in blood is presented. A methanolic stock standard prepared from pure gaseous difluoroethane was used to prepare whole blood calibrators. Quantitation of difluoroethane was performed using a six-point calibration curve and an internal standard of 1-propanol. The assay is linear from 0 to 115 mg/L, including a low calibrator at 4 mg/L, the limit of quantitation. Within-run coefficients of variation at mean concentrations of 13.8 mg/L and 38.5 mg/L were 5.8% and 6.8%, respectively. Between-run coefficients of variation at mean concentrations of 15.9 mg/L and 45.7 mg/L were 13.4% and 9.8%, respectively. Several volatile substances were tested as potential interfering compounds, with propane having a retention time identical to that of difluoroethane. This method requires minimal sample preparation, is rapid and reproducible, can be modified for the quantitation of other volatiles, and could be automated using an automatic sampler/injector system.
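A minimal sketch of the internal-standard calibration described above, assuming a linear analyte-to-1-propanol area-ratio response over the stated range; the ratios are invented for illustration.

```python
import numpy as np

# Hypothetical six-point whole-blood calibration: difluoroethane concentration
# (mg/L) vs. the analyte / 1-propanol internal-standard peak-area ratio.
conc_mg_l = np.array([4., 15., 30., 60., 90., 115.])
area_ratio = np.array([0.034, 0.131, 0.268, 0.529, 0.804, 1.021])
slope, intercept = np.polyfit(conc_mg_l, area_ratio, 1)

def quantify(ratio):
    """Blood concentration (mg/L) from an unknown's area ratio."""
    return (ratio - intercept) / slope

print(f"case sample: {quantify(0.35):.1f} mg/L")
```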
GEMAS: Colours of dry and moist agricultural soil samples of Europe
NASA Astrophysics Data System (ADS)
Klug, Martin; Fabian, Karl; Reimann, Clemens
2016-04-01
High resolution HDR colour images of all Ap samples from the GEMAS survey were acquired using a GeoTek Linescan camera. Three measurements of dry and wet samples with increasing exposure time and increasing illumination settings produced a set of colour images at 50 μm resolution. Automated image processing was used to calibrate the six images per sample with respect to the synchronously measured X-Rite ColorChecker chart. The calibrated images were then fitted to Munsell soil colours that were measured in the same way. The results provide overview maps of dry and moist European soil colours. Because colour is closely linked to iron mineralogy, carbonate, silicate and organic carbon content, the results can be correlated with magnetic, mineralogical, and geochemical properties. In combination with the full GEMAS chemical and physical measurements, this yields a valuable data set for calibration and interpretation of visible satellite colour data with respect to chemical composition and geological background, soil moisture, and soil degradation. This data set will help to develop new methods for world-wide characterization and monitoring of agricultural soils, which is essential for quantifying geologic and human impact on the critical zone environment. It furthermore enables the scientific community and governmental authorities to monitor consequences of climatic change, to plan and administrate economic and ecological land use, and to use the data set for forensic applications.
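One common way to realize such a chart-based calibration is an affine colour correction solved in least squares; the sketch below uses synthetic patch values and illustrates the idea only, not the GEMAS processing code.

```python
import numpy as np

# Hypothetical patch data: mean RGB measured for the 24 chart patches in one
# scan, and the chart's reference RGBs; a synthetic distortion stands in for
# the real camera/illumination response.
rng = np.random.default_rng(2)
reference = rng.uniform(20, 235, size=(24, 3))
gain = np.array([[1.05, 0.02, -0.01], [0.01, 0.97, 0.03], [-0.02, 0.01, 1.03]])
measured = reference @ gain + 4.0 + rng.normal(0, 0.5, (24, 3))

# Affine colour correction: least-squares solve of [R G B 1] @ M = reference.
A = np.hstack([measured, np.ones((24, 1))])
M, *_ = np.linalg.lstsq(A, reference, rcond=None)

def calibrate(rgb):
    """Map a measured pixel to calibrated colour coordinates."""
    return np.append(rgb, 1.0) @ M

print(calibrate(np.array([100.0, 120.0, 90.0])))
```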
NASA Astrophysics Data System (ADS)
Moskalenko, Konstantin L.; Sobolev, Nikolai V.; Adamovskay, Inna A.; Stepanov, Eugene V.; Nadezhdinskii, Alexander I.; McKenna-Lawlor, Susan
1994-01-01
Measurements of carbon monoxide and carbon dioxide concentrations by registration of high resolution absorption spectra are described. A fully automated diode laser system developed to simultaneously measure CO and CO2, with sensitivity down to 50 ppb for CO and 0.1 vol% for CO2, is described. Calculation of CO and CO2 concentrations was carried out on the basis of a priori data on the strengths and broadening coefficients of the detected absorption lines. Test procedures for such diode laser systems are described. Possible factors affecting the accuracy and reliability of the obtained data (e.g., the level of the diode laser's spontaneous emission, the stability of the CO content in a cell, etc.) for absolute and relative calibration procedures are discussed. The physiological levels of CO concentration in the breath of non-smokers and smokers under different ambient conditions of CO concentration in the atmosphere (in Moscow and in Maynooth) are compared. Recent results on statistical studies of the behavior of CO concentration as a function of breath-holding time are presented.
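The concentration retrieval from a priori line parameters is essentially a Beer-Lambert inversion; a sketch under assumed, purely illustrative values of line strength, path length and integrated absorbance (not the instrument's calibration constants):

```python
import numpy as np

# Beer-Lambert estimate of number density from one absorption line, given an
# a priori line strength S and a measured integrated absorbance.
S = 2.1e-19                      # line strength, cm^-1 / (molecule cm^-2)
path_cm = 1000.0                 # optical path length of the cell (cm)
integrated_absorbance = 4.2e-4   # cm^-1, area under -ln(I/I0) over the line

n = integrated_absorbance / (S * path_cm)   # molecules per cm^3
ppb = n / 2.46e19 * 1e9                     # mixing ratio at 296 K, 1 atm
print(f"CO ~ {ppb:.0f} ppb")
```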
Yoshioka, Craig; Pulokas, James; Fellmann, Denis; Potter, Clinton S.; Milligan, Ronald A.; Carragher, Bridget
2007-01-01
Visualization by electron microscopy has provided many insights into the composition, quaternary structure, and mechanism of macromolecular assemblies. By preserving samples in stain or vitreous ice it is possible to image them as discrete particles, and from these images generate three-dimensional structures. This 'single-particle' approach suffers from two major shortcomings: it requires an initial model to reconstitute 2D data into a 3D volume, and it often fails when faced with conformational variability. Random conical tilt (RCT) and orthogonal tilt reconstruction (OTR) are methods developed to overcome these problems, but the data collection required, particularly for vitreous ice specimens, is difficult and tedious. In this paper we present an automated approach to RCT/OTR data collection that removes the burden of manual collection and offers higher quality and throughput than is otherwise possible. We show example datasets collected under stain and cryo conditions and provide statistics related to the efficiency and robustness of the process. Furthermore, we describe the new algorithms that make this method possible, which include new calibrations, improved targeting and feature-based tracking. PMID:17524663
Design of microcontroller based system for automation of streak camera.
Joshi, M J; Upadhyay, J; Deshpande, P P; Sharma, M L; Navathe, C P
2010-08-01
A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8 bit MCS family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for various electrodes of the tubes are generated using dc-to-dc converters. A high voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LABVIEW based graphical user interface has been developed which enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.
Vasilyeva, I V; Shvirev, S L; Arseniev, S B; Zarubina, T V
2013-01-01
The aim of the present study is to assess the feasibility and validity of using the prognostic scales ISS-RTS-TRISS, PRISM, APACHE II and PTS for automated calculation in decision support when treating children with severe mechanical traumas. The mentioned scales are used in the Hospital Information System (HIS) MEDIALOG. The retrospective study was conducted using clinical and physiological data collected at admission and during the first 24 hours of hospitalization in 166 patients. The PRISM, APACHE II and ISS-RTS-TRISS scales were used for calculating the severity of injury and for prognosis of death outcomes. The PTS scale was used for evaluating the severity index only. Our research has shown that ISS-RTS-TRISS has excellent discrimination ability, and the PRISM and APACHE II prognostic scales have acceptable discrimination ability; moreover, they all have significant calibration ability. The PTS scale has acceptable discrimination ability. It has been shown that automated calculation of the ISS-RTS-TRISS, PRISM, APACHE II and PTS scales is useful for assessing outcomes in children with severe mechanical trauma.
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which does not require previous camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images, as sketched below. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, especially suitable for rapid response and precise modelling in disaster emergencies.
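A toy version of the topology pruning, assuming camera positions are available from the flight-control log; the positions and baseline threshold are invented.

```python
import numpy as np
from itertools import combinations

# Hypothetical camera positions (m) taken from the UAV flight-control log.
positions = np.array([[0, 0], [30, 0], [60, 0], [0, 25], [30, 25], [60, 25]], float)

def topology_pairs(positions, max_baseline=40.0):
    """Image pairs whose cameras are close enough to plausibly overlap.
    Matching only these pairs avoids the O(n^2) cost of exhaustive matching."""
    return [(i, j) for i, j in combinations(range(len(positions)), 2)
            if np.linalg.norm(positions[i] - positions[j]) <= max_baseline]

print(topology_pairs(positions))
```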
NASA Astrophysics Data System (ADS)
Keane, Tommy P.; Saber, Eli; Rhody, Harvey; Savakis, Andreas; Raj, Jeffrey
2012-04-01
Contemporary research in automated panorama creation utilizes camera calibration or extensive knowledge of camera locations and relations to each other to achieve successful results. Research in image registration attempts to restrict these same camera parameters or apply complex point-matching schemes to overcome the complications found in real-world scenarios. This paper presents a novel automated panorama creation algorithm by developing an affine transformation search based on maximized mutual information (MMI) for region-based registration. Standard MMI techniques have been limited to applications with airborne/satellite imagery or medical images. We show that a novel MMI algorithm can approximate an accurate registration between views of realistic scenes of varying depth distortion. The proposed algorithm has been developed using stationary, color, surveillance video data for a scenario with no a priori camera-to-camera parameters. This algorithm is robust for strict- and nearly-affine-related scenes, while providing a useful approximation for the overlap regions in scenes related by a projective homography or a more complex transformation, allowing for a set of efficient and accurate initial conditions for pixel-based registration.
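The core MMI cost can be written compactly as a histogram-based mutual information between the overlap regions of two images; a self-contained sketch of that computation (not the authors' implementation):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information (nats) of two same-size images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy check: MI of an image with itself far exceeds MI with shuffled pixels.
rng = np.random.default_rng(3)
img = rng.integers(0, 256, (64, 64)).astype(float)
shuffled = rng.permutation(img.ravel()).reshape(img.shape)
print(mutual_information(img, img), mutual_information(img, shuffled))
```

An affine-transformation search then maximizes this quantity over candidate registrations.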
Electronic drop sensing in microfluidic devices: automated operation of a nanoliter viscometer
Srivastava, Nimisha; Burns, Mark A.
2007-01-01
We describe three droplet sensing techniques: a digital electrode, an analog electrode, and a thermal method. All three techniques use a single layer of metal lines that is easy to microfabricate and an electronic signal can be produced using low DC voltages. While the electrode methods utilize changes in electrical conductivity when the air/liquid interface of the droplet passes over a pair of electrodes, the thermal method is based on convective heat loss from a locally heated region. For the electrode method, the analog technique is able to detect 25 nL droplets while the digital technique is capable of detecting droplets as small as 100 pL. For thermal sensing, temperature profiles in the range of 36 °C and higher were used. Finally, we have used the digital electrode method and an array of electrodes located at preset distances to automate the operation of a previously described microfluidic viscometer. The viscometer is completely controlled by a laptop computer, and the total time for operation including setup, calibration, sample addition and viscosity calculation is approximately 4 minutes. PMID:16738725
Innovative Technology Transfer Partnerships
NASA Technical Reports Server (NTRS)
Kohler, Jeff
2004-01-01
The National Aeronautics and Space Administration (NASA) seeks to license its Advanced Tire and Strut Pressure Monitor (TSPM) technology. The TSPM is a handheld system to accurately measure tire and strut pressure and temperature over a wide temperature range (20 to 120 °F), as well as improve personnel safety. Sensor accuracy, electronics design, and a simple user interface allow operators quick, easy access to required measurements. The handheld electronics, powered by 12-VAC or by 9-VDC batteries, provide the user with an easy-to-read visual display of pressure/temperature or the streaming of pressure/temperature data via an RS-232 interface. When connected to a laptop computer, this new measurement system can provide users with automated data recording and trending, eliminating the chance for data hand-recording errors. In addition, calibration software allows for calibration data to be automatically utilized for the generation of new data conversion equations, simplifying the calibration processes that are so critical to reliable measurements. The design places a high-accuracy pressure sensor (also used as a temperature sensor) as close to the tire or strut measurement location as possible, allowing the user to make accurate measurements rapidly, minimizing the amount of high-pressure volumes, and allowing reasonable distance between the tire or strut and the operator. The pressure sensor attaches directly to the pressure supply/relief valve on the tire and/or strut, with necessary electronics contained in the handheld enclosure. A software algorithm ensures high accuracy of the device over the wide temperature range. Using the pressure sensor as a temperature sensor permits measurement of the actual temperature of the pressurized gas. This device can be adapted to create a portable calibration standard that does not require thermal conditioning. This allows accurate pressure measurements without disturbing the gas temperature. In-place calibration can save considerable time and money and is suitable in many process applications throughout industry.
Hyer, D; Mart, C
2012-06-01
The aim of this study was to develop a phantom and analysis software that could be used to quickly and accurately determine the location of radiation isocenter using the Electronic Portal Imaging Device (EPID). The phantom could then be used as a static reference point for performing other tests, including radiation vs. light field coincidence, MLC and jaw strip tests, and Varian Optical Guidance Platform (OGP) calibration. The proposed solution uses a collimator setting of 10×10 cm to acquire EPID images of the new phantom constructed from LEGO® blocks. Images from a number of gantry and collimator angles are analyzed by the software to determine the position of the jaws and the center of the phantom in each image. The distance between a chosen jaw and the phantom center is then compared to the same distance measured after a 180 degree collimator rotation to determine if the phantom is centered in the dimension being investigated. The accuracy of the algorithm's measurements was verified by independent measurement to be approximately equal to the detector's pitch. Light versus radiation field as well as MLC and jaw strip tests are performed using measurements based on the phantom center once located at the radiation isocenter. Reproducibility tests show that the algorithm's results were objectively repeatable. Additionally, the phantom and software are completely independent of linac vendor, and this study presents results from two major linac manufacturers. An OGP calibration array was also integrated into the phantom to allow calibration of the OGP while the phantom is positioned at radiation isocenter, reducing the setup uncertainty contained in the calibration. This solution offers a quick, objective method to perform isocenter localization as well as laser alignment, OGP calibration, and other tests on a monthly basis. © 2012 American Association of Physicists in Medicine.
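The collimator-rotation trick at the heart of the method reduces to simple arithmetic: rotating the collimator 180 degrees flips the jaw to the other side of isocentre while the phantom stays put, so half the difference of the two jaw-to-centre distances is the phantom offset. A sketch with invented distances:

```python
def phantom_offset(d_col0, d_col180):
    """Offset (cm) of the phantom centre from radiation isocentre along one
    axis, from the jaw-to-centre distance before and after a 180-degree
    collimator rotation; the jaw flips sides, the phantom does not."""
    return 0.5 * (d_col0 - d_col180)

# Hypothetical EPID measurements (cm): centred if the offset is within tolerance.
offset = phantom_offset(5.12, 4.98)
print(f"offset {offset:+.2f} cm -> {'centred' if abs(offset) < 0.1 else 'shift phantom'}")
```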
NASA Astrophysics Data System (ADS)
McClure, C.; Jaffe, D. A.; Edgerton, E.; Jansen, J. J.
2013-12-01
During the summer of 2013, we initiated a project to examine the performance of Tekran measurements of Gaseous Oxidized Mercury (GOM) with a pyrolysis method at the North Birmingham SEARCH site. Measurements started in June 2013 and will run until September 2013. This project responds to recent studies that indicate problems with the KCl denuder method for collection of GOM (e.g. Lyman et al., 2010; Gustin et al., 2013; Ambrose et al., 2013). For this project, we compared two GOM measurement systems, one using the KCl denuder method and a second using high temperature pyrolysis of Hg compounds and detection of the resulting Hg0 vapors. Both instruments were also calibrated using an HgBr2 source to understand the recovery of one possible atmospheric GOM constituent. Both instruments sampled from a common, heated manifold. Past work has shown that, in order to fully transmit HgBr2, sample lines must be made from PFA and heated to 100 °C. The transmission rate of HgBr2 during this project is approximately 90% over 25 feet of sample tubing at this temperature. Very preliminary results from this study have found that the transmitted HgBr2 is captured with 95% efficiency in carbon-scrubbed ambient air by both the KCl denuder and the pyrolysis method. However, the denuder method appears to be significantly less efficient in the capture of GOM when sampling unaltered ambient air versus the pyrolysis validation of total Hg0. Therefore, calibration of GOM measurements is essential in order to accurately correct for fluctuations in the GOM capture efficiency. We have also found that calibrations for GOM can be done routinely in the field and that they are essential to fully understand the GOM measurements. At present our calibration is performed manually, but in principle the method could be readily automated.
A Computer Program for Flow-Log Analysis of Single Holes (FLASH)
Day-Lewis, F. D.; Johnson, C.D.; Paillet, Frederick L.; Halford, K.J.
2011-01-01
A new computer program, FLASH (Flow-Log Analysis of Single Holes), is presented for the analysis of borehole vertical flow logs. The code is based on an analytical solution for steady-state multilayer radial flow to a borehole. The code includes options for (1) discrete fractures and (2) multilayer aquifers. Given vertical flow profiles collected under both ambient and stressed (pumping or injection) conditions, the user can estimate fracture (or layer) transmissivities and far-field hydraulic heads. FLASH is coded in Microsoft Excel with Visual Basic for Applications routines. The code supports manual and automated model calibration. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
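Under the Thiem-type steady-state assumption such a solution is built on, each zone's inflow under the two conditions gives two equations in two unknowns, so transmissivity and far-field head can be solved per zone. A hedged sketch with invented flows and an assumed geometry factor (this is not the FLASH code itself):

```python
import numpy as np

# Hypothetical zone inflows (m^3/d) obtained by differencing the vertical flow
# log across each fracture zone, under ambient and pumped conditions.
q_ambient = np.array([0.5, -0.2, 0.1])
q_stressed = np.array([3.1, 1.4, 2.0])
hw_ambient, hw_stressed = 10.00, 8.50          # open-well water levels (m)
c = 2 * np.pi / np.log(100.0 / 0.05)           # Thiem factor, assumed r0 and rw

# Each zone obeys q_i = c * T_i * (h_i - h_w) under both conditions; two
# equations per zone solve for transmissivity T_i and far-field head h_i.
T = (q_stressed - q_ambient) / (c * (hw_ambient - hw_stressed))
h_far = hw_ambient + q_ambient / (c * T)
for i, (t, h) in enumerate(zip(T, h_far)):
    print(f"zone {i}: T = {t:.3f} m^2/d, far-field head = {h:.2f} m")
```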
NASA Astrophysics Data System (ADS)
Kugeiko, M. M.; Lisenko, S. A.
2008-07-01
An easily automated method for determining the real part of the refractive index of human blood erythrocytes in the range 0.3–1.2 μm is proposed. The method is operationally and metrologically reliable and is based on the measurement of the coefficients of light scattering into the forward and backward hemispheres at two pairs of angles and on the use of multiple regression equations. An engineering solution for constructing a measurement system according to this method is proposed, which makes it possible to minimize the calibration errors and the effects of destabilizing factors.
Hoelsher, James W.; Hegland, Joel E.; Braunlich, Peter F.; Tetzlaff, Wolfgang
1992-01-01
Radiation dosimeters and dosimeter badges. The dosimeter badges include first and second parts which are connected to join using a securement to produce a sealed area in which at least one dosimeter is held and protected. The badge parts are separated to expose the dosimeters to a stimulating laser beam used to read dose exposure information therefrom. The badge is constructed to allow automated disassembly and reassembly in a uniquely fitting relationship. An electronic memory is included to provide calibration and identification information used during reading of the dosimeter. Dosimeter mounts which reduce thermal heating requirements are shown. Dosimeter constructions and production methods using thin substrates and phosphor binder-layers applied thereto are also taught.
NASA Technical Reports Server (NTRS)
Burnett, S. Kay; Forsyth, Theodore J.; Maynard, Everett E.
1987-01-01
The development of a computerized instrumentation test plan (ITP) for the NASA/Ames Research Center National Full Scale Aerodynamics Complex (NFAC) is discussed. The objective of the ITP program was to aid the instrumentation engineer in documenting the configuration and calibration of data acquisition systems for a given test at any of four low speed wind tunnel facilities (Outdoor Aerodynamic Research Facility, 7 x 10, 40 x 80, and 80 x 120) at the NFAC. It is noted that automation of the ITP has decreased errors, engineering hours, and setup time while adding a higher level of consistency and traceability.
Automated acoustic intensity measurements and the effect of gear tooth profile on noise
NASA Technical Reports Server (NTRS)
Atherton, William J.; Pintz, Adam; Lewicki, David G.
1987-01-01
Acoustic intensity measurements were made at the NASA Lewis Research Center on a spur gear test apparatus. The measurements were obtained with the Robotic Acoustic Intensity Measurement System developed by Cleveland State University. This system provided dense spatial positioning and was calibrated against a high quality acoustic intensity system. The gear noise measurements compared gearsets having two different tooth profiles. The tests evaluated the sound field of the different gears at two speeds and three loads. The experimental results showed that gear tooth profile had a major effect on measured noise. Load and speed were also found to affect noise.
Application of IR imaging for free-surface velocity measurement in liquid-metal systems
Hvasta, M. G.; Kolemen, E.; Fisher, A.
2017-01-05
Measuring free-surface, liquid-metal flow velocity is challenging to do in a reliable and accurate manner. This paper presents a non-invasive, easily calibrated method of measuring the surface velocities of open-channel liquid-metal flows using an IR camera. Unlike other spatially limited methods, this IR camera particle tracking technique provides full field-of-view data that can be used to better understand open-channel flows and determine surface boundary conditions. Lastly, this method could be implemented and automated for a wide range of liquid-metal experiments, even if they operate at high-temperatures or within strong magnetic fields.
Report of the panel on international programs
NASA Technical Reports Server (NTRS)
Anderson, Allen Joel; Fuchs, Karl W.; Ganeka, Yasuhiro; Gaur, Vinod; Green, Andrew A.; Siegfried, W.; Lambert, Anthony; Rais, Jacub; Reighber, Christopher; Seeger, Herman
1991-01-01
The panel recommends that NASA participate and take an active role in the continuous monitoring of existing regional networks, the realization of high resolution geopotential and topographic missions, the establishment of interconnection of the reference frames as defined by different space techniques, the development and implementation of automation for all ground-to-space observing systems, calibration and validation experiments for measuring techniques and data, the establishment of international space-based networks for real-time transmission of high density space data in standardized formats, tracking and support for non-NASA missions, and the extension of state-of-the art observing and analysis techniques to developing nations.
The Ionosphere and Ocean Altimetry
NASA Technical Reports Server (NTRS)
Lindqwister, Ulf J.
1999-01-01
The accuracy of satellite-based single-frequency radar ocean altimeters benefits from calibration of the total electron content (TEC) of the ionosphere below the satellite. Data from the global network of Global Positioning System (GPS) receivers provides timely, continuous, and globally well-distributed measurements of ionospheric electron content. We have created a daily automated process called Daily Global Ionospheric Map (Daily-GIM) whose primary purpose is to use global GPS data to provide ionospheric calibration data for the Geosat Follow-On (GFO) ocean altimeter. This process also produces an hourly time-series of global maps of the electron content of the ionosphere. This system is designed to deliver "quick-look" ionospheric calibrations within 24 hours with 90+% reliability and with a root-mean-square accuracy of 2 cm at 13.6 GHz. In addition we produce a second product within 72 hours which takes advantage of additional GPS data which were not available in time for the first process. The diagram shows an example of a comparison between TEC data from the Topographic Experiment (TOPEX) ocean altimeter and Daily-GIM. TEC are displayed in TEC units, TECU, where 5 TECU is 1 cm at 13.6 GHz. Data from a single TOPEX track is shown. Also shown is the Bent climatological model TEC for the track. Although the GFO satellite is not yet in its operational mode, we have been running Daily-GIM reliably (much better than 90%) with better than 2-cm accuracy (based on comparisons against TOPEX) for several months. When timely ephemeris files for the European Remote Sensing Satellite 2 (ERS-2) are available, daily ERS-2 altimeter ionospheric calibration files are produced. When GFO ephemeris files are made available to us, we produce GFO ionosphere calibration files. Users of these GFO ionosphere calibration files find they are a great improvement over the alternative International Reference Ionosphere 1995 (IRI-95) climatological model. In addition, the TOPEX orbit determination team at JPL has been using the global ionospheric maps to calibrate the single frequency GPS data from the TOPEX receiver, and report highly significant improvements in the ephemeris. The global ionospheric maps are delivered daily to the International GPS Service (IGS), making them available to the scientific community.
The Chandra Source Catalog: Algorithms
NASA Astrophysics Data System (ADS)
McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-09-01
Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.
Design and Analysis of a Sensor System for Cutting Force Measurement in Machining Processes
Liang, Qiaokang; Zhang, Dan; Coppola, Gianmarc; Mao, Jianxu; Sun, Wei; Wang, Yaonan; Ge, Yunjian
2016-01-01
Multi-component force sensors have infiltrated a wide variety of automation products since the 1970s. However, one seldom finds full-component sensor systems available in the market for cutting force measurement in machine processes. In this paper, a new six-component sensor system with a compact monolithic elastic element (EE) is designed and developed to detect the tangential cutting forces Fx, Fy and Fz (i.e., forces along x-, y-, and z-axis) as well as the cutting moments Mx, My and Mz (i.e., moments about x-, y-, and z-axis) simultaneously. Optimal structural parameters of the EE are carefully designed via simulation-driven optimization. Moreover, a prototype sensor system is fabricated, which is applied to a 5-axis parallel kinematic machining center. Calibration experimental results demonstrate that the system is capable of measuring cutting forces and moments with good linearity while minimizing coupling error. Both the Finite Element Analysis (FEA) and calibration experimental studies validate the high performance of the proposed sensor system that is expected to be adopted into machining processes. PMID:26751451
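Calibration of such a six-component sensor typically amounts to estimating a decoupling matrix in least squares from known applied loads; the sketch below uses a synthetic sensitivity matrix and noise, and illustrates the idea rather than the authors' procedure.

```python
import numpy as np

# Synthetic calibration run: 50 known load vectors (Fx..Mz) applied to the
# sensor and the six bridge outputs recorded; the true sensitivity matrix and
# noise level are invented for illustration.
rng = np.random.default_rng(4)
sensitivity = 0.05 * rng.normal(size=(6, 6)) + np.eye(6)   # outputs per unit load
loads = rng.uniform(-100, 100, size=(50, 6))
outputs = loads @ sensitivity.T + 0.01 * rng.normal(size=(50, 6))

# Least-squares decoupling matrix C with outputs @ C ~= loads; the residual of
# this fit quantifies linearity and cross-coupling error.
C, *_ = np.linalg.lstsq(outputs, loads, rcond=None)
print("max reconstruction error:", np.abs(outputs @ C - loads).max())
```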
NASA Astrophysics Data System (ADS)
Shinnaga, H.; Humphreys, E.; Indebetouw, R.; Villard, E.; Kern, J.; Davis, L.; Miura, R. E.; Nakazato, T.; Sugimoto, K.; Kosugi, G.; Akiyama, E.; Muders, D.; Wyrowski, F.; Williams, S.; Lightfoot, J.; Kent, B.; Momjian, E.; Hunter, T.; ALMA Pipeline Team
2015-12-01
The ALMA Pipeline is the automated data reduction tool that runs on ALMA data. The current version of the ALMA Pipeline produces science-quality data products for standard interferometric observing modes up to the calibration stage. The ALMA Pipeline comprises (1) heuristics, in the form of Python scripts, that select the best processing parameters, and (2) contexts that are kept for book-keeping of the data processing. The ALMA Pipeline produces a "weblog" that showcases detailed plots for users to judge how each calibration step was performed. The ALMA Interferometric Pipeline was conditionally accepted in March 2014 after processing Cycle 0 and Cycle 1 data sets. From Cycle 2, the ALMA Pipeline is used for ALMA data reduction and quality assurance for projects whose observing modes are supported by the pipeline. Pipeline tasks are available based on CASA version 4.2.2, and the first public pipeline release, called CASA 4.2.2-pipe, has been available since October 2014. One can reduce ALMA data with both CASA tasks and pipeline tasks using CASA version 4.2.2-pipe.
Fast and accurate enzyme activity measurements using a chip-based microfluidic calorimeter.
van Schie, Morten M C H; Ebrahimi, Kourosh Honarmand; Hagen, Wilfred R; Hagedoorn, Peter-Leon
2018-03-01
Recent developments in microfluidic and nanofluidic technologies have resulted in development of new chip-based microfluidic calorimeters with potential use in different fields. One application would be the accurate high-throughput measurement of enzyme activity. Calorimetry is a generic way to measure activity of enzymes, but unlike conventional calorimeters, chip-based calorimeters can be easily automated and implemented in high-throughput screening platforms. However, application of chip-based microfluidic calorimeters to measure enzyme activity has been limited due to problems associated with miniaturization such as incomplete mixing and a decrease in volumetric heat generated. To address these problems we introduced a calibration method and devised a convenient protocol for using a chip-based microfluidic calorimeter. Using the new calibration method, the progress curve of alkaline phosphatase, which has product inhibition for phosphate, measured by the calorimeter was the same as that recorded by UV-visible spectroscopy. Our results may enable use of current chip-based microfluidic calorimeters in a simple manner as a tool for high-throughput screening of enzyme activity with potential applications in drug discovery and enzyme engineering. Copyright © 2017. Published by Elsevier Inc.
Gamma/x-ray linear pushbroom stereo for 3D cargo inspection
NASA Astrophysics Data System (ADS)
Zhu, Zhigang; Hu, Yu-Chi
2006-05-01
For evaluating the contents of trucks, containers, cargo, and passenger vehicles with a non-intrusive gamma-ray or X-ray imaging system to determine the possible presence of contraband, three-dimensional (3D) measurements can provide more information than 2D measurements. In this paper, a linear pushbroom scanning model is built for such a commonly used gamma-ray or x-ray cargo inspection system. Accurate 3D measurements of the objects inside a cargo container can be obtained by using two such scanning systems with different scanning angles to construct a pushbroom stereo system. A simple but robust calibration method is proposed to find the important parameters of the linear pushbroom sensors. Then, a fast and automated stereo matching algorithm based on free-form deformable registration is developed to obtain 3D measurements of the objects under inspection. A user interface is designed for 3D visualization of the objects of interest. Experimental results of sensor calibration, stereo matching, 3D measurements, and visualization of a 3D cargo container and the objects inside are presented.
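The essence of a linear pushbroom model is that the image is orthographic along the scan (motion) direction but perspective within each scan line. A toy forward projection under that assumption, with made-up parameters (not the paper's calibrated values):

    import numpy as np

    def pushbroom_project(X, v, f, x0, y0):
        """Toy linear pushbroom projection of 3D points.

        The sensor moves along the x-axis with speed v, so the image row
        is linear (orthographic) in x; within each scan line the y
        coordinate is perspective-divided by depth z.
        """
        X = np.asarray(X, dtype=float)
        u = X[:, 0] / v + x0             # row: linear in along-track x
        w = X[:, 1] * f / X[:, 2] + y0   # column: perspective in the line
        return np.column_stack([u, w])

    pts = np.array([[10.0, 1.0, 5.0], [12.0, -0.5, 4.0]])
    print(pushbroom_project(pts, v=0.2, f=800.0, x0=0.0, y0=512.0))

Calibrating such a sensor amounts to recovering v, f, and the offsets (plus pose) from known targets, after which two views at different scan angles can be triangulated as a stereo pair.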
Chai, X S; Schork, F J; DeCinque, Anthony
2005-04-08
This paper reports an improved headspace gas chromatographic (GC) technique for the determination of monomer solubilities in water. The method is based on a multiple headspace extraction GC technique developed previously [X.S. Chai, Q.X. Hou, F.J. Schork, J. Appl. Polym. Sci., in press], but with a major modification of the calibration technique. As a result, only a few iterations of headspace extraction and GC measurement are required, which avoids "exhaustive" headspace extraction and thus shortens the experimental time for each analysis. For highly insoluble monomers, effort must be made to minimize adsorption in the headspace sampling channel, transport conduit, and capillary column by using a higher operating temperature and a short capillary column in the headspace sampler and GC system. For highly water-soluble monomers, a new calibration method is proposed. The combination of these modifications results in a method that is simple, rapid, and automated. While the current focus of the authors is on the determination of monomer solubility in aqueous solutions, the method should be applicable to the determination of the solubility of any organic compound in water.
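What makes a few extractions sufficient is that in multiple headspace extraction the peak areas decay geometrically, A_i = A_1 * beta^(i-1), so the exhaustive-extraction total A_1 / (1 - beta) can be extrapolated from a log-linear fit. A sketch with illustrative peak areas:

    import numpy as np

    # Peak areas from a handful of successive headspace extractions.
    A = np.array([1520.0, 1140.0, 861.0, 648.0])   # illustrative GC areas

    # MHE model: A_i = A_1 * beta**(i-1)  =>  ln(A_i) is linear in i.
    i = np.arange(1, len(A) + 1)
    slope, intercept = np.polyfit(i, np.log(A), 1)
    beta = np.exp(slope)
    A1 = np.exp(intercept + slope)       # fitted value at i = 1

    # Geometric series: total area an exhaustive extraction would give.
    A_total = A1 / (1.0 - beta)
    print(f"beta = {beta:.3f}, extrapolated total area = {A_total:.0f}")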
Information theoretic methods for image processing algorithm optimization
NASA Astrophysics Data System (ADS)
Prokushkin, Sergey F.; Galil, Erez
2015-01-01
Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable through manual calibration; thus an automated approach is a must. We discuss an information theory based metric for evaluating the adaptive characteristics of an algorithm (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).
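An information-restoration cost of this general kind can be approximated by the mutual information between a clean reference and the filter output, estimated from a joint histogram. The sketch below is a generic stand-in, not the authors' exact criterion:

    import numpy as np

    def mutual_information(a, b, bins=64):
        """Histogram estimate of mutual information between two images."""
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(1)
    clean = rng.normal(size=(128, 128))
    noisy = clean + 0.5 * rng.normal(size=clean.shape)
    denoised = 0.8 * clean + 0.1 * rng.normal(size=clean.shape)  # toy filter
    # A more adaptive filter restores more information about `clean`.
    print(mutual_information(clean, noisy), mutual_information(clean, denoised))

Used as a cost function, such a measure rewards parameter settings that recover signal content rather than ones that merely look smooth.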
Optical power of VCSELs stabilized to 35 ppm/°C without a TEC
NASA Astrophysics Data System (ADS)
Downing, John
2015-03-01
This paper reports a method and system comprising a light source, an electronic method, and a calibration procedure for stabilizing the optical power of vertical-cavity surface-emitting lasers (VCSELs) and laser diodes (LDs) without the use of thermoelectric coolers (TECs). The system eliminates the need for custom interference coatings, polarization adjustments, and the exact alignment required by the optical method reported in 2013 [1]. It can precisely compensate for the effects of temperature and wavelength drift on photodiode responsivity, as well as changes in VCSEL beam quality and polarization angle, over a 50°C temperature range. Data obtained from light sources built with single-mode polarization-locked VCSELs demonstrate that 30 ppm/°C stability can be readily obtained. The system has advantages over TEC-stabilized laser modules that include: 1) 90% lower relative RMS optical power and temperature sensitivity, 2) a five-fold enhancement of wall-plug efficiency, 3) less component testing and sorting, 4) lower manufacturing costs, and 5) practical automated batch calibration at the time of manufacture. The system is ideally suited for battery-powered environmental and in-home medical monitoring applications.
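The electronic compensation amounts to dividing the monitor photodiode current by a temperature-dependent responsivity model stored at calibration time, then servoing the drive current to hold the corrected power constant. A toy sketch with invented calibration points and a quadratic responsivity model (not the paper's actual coefficients):

    import numpy as np

    # Factory batch calibration: responsivity vs. temperature, fit once
    # and stored in the controller (numbers invented for illustration).
    T_cal = np.array([0.0, 12.5, 25.0, 37.5, 50.0])        # deg C
    R_cal = np.array([0.472, 0.478, 0.481, 0.483, 0.482])  # A/W measured
    coeffs = np.polyfit(T_cal, R_cal, 2)                   # quadratic model

    def optical_power(i_pd_amps, temp_c):
        """Temperature-compensated optical power from the monitor diode."""
        responsivity = np.polyval(coeffs, temp_c)          # A/W at temp_c
        return i_pd_amps / responsivity                    # watts

    # The control loop adjusts VCSEL drive current so this stays constant.
    print(optical_power(4.81e-4, 25.0))   # ~1 mW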
The ALMA Science Pipeline: Current Status
NASA Astrophysics Data System (ADS)
Humphreys, Elizabeth; Miura, Rie; Brogan, Crystal L.; Hibbard, John; Hunter, Todd R.; Indebetouw, Remy
2016-09-01
The ALMA Science Pipeline is being developed for the automated calibration and imaging of ALMA interferometric and single-dish data. The calibration Pipeline for interferometric data was accepted for use by ALMA Science Operations in 2014, and for single-dish data end-to-end processing in 2015. However, work is ongoing to expand the use cases for which the Pipeline can be used, e.g., higher-frequency and lower signal-to-noise datasets, and new observing modes. A current focus includes the commissioning of science target imaging for interferometric data. For the Single Dish Pipeline, the line-finding algorithm used in baseline subtraction and the baseline flagging heuristics have been greatly improved since the prototype used for data from the previous cycle. These algorithms, unique to the Pipeline, produce better results than standard manual processing in many cases. In this poster, we report on the current status of the Pipeline capabilities, present initial results from the Imaging Pipeline, and describe the smart line-finding and flagging algorithms used in the Single Dish Pipeline. The Pipeline is released as part of CASA (the Common Astronomy Software Applications package).
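A baseline-subtraction line finder of the general kind described here can be sketched as iterative sigma clipping: fit a smooth baseline to unmasked channels, flag excursions, and refit until the mask converges. This is a simplified generic stand-in, not the Pipeline's actual algorithm:

    import numpy as np

    def find_lines(spectrum, order=2, nsigma=3.0, niter=5):
        """Boolean line mask via iterative sigma-clipped baseline fits."""
        x = np.arange(spectrum.size)
        mask = np.zeros(spectrum.size, dtype=bool)   # True = line channel
        for _ in range(niter):
            coeffs = np.polyfit(x[~mask], spectrum[~mask], order)
            resid = spectrum - np.polyval(coeffs, x)
            sigma = np.std(resid[~mask])
            new_mask = resid > nsigma * sigma
            if np.array_equal(new_mask, mask):
                break
            mask = new_mask
        return mask, np.polyval(coeffs, x)

    rng = np.random.default_rng(2)
    spec = 0.001 * (np.arange(512) - 256) + rng.normal(0, 0.1, 512)
    spec[200:215] += 1.5                              # injected "line"
    mask, baseline = find_lines(spec)
    print(mask[195:220].astype(int))                  # line channels flagged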
CFHT data processing and calibration ESPaDOnS pipeline: Upena and OPERA (optical spectropolarimetry)
NASA Astrophysics Data System (ADS)
Martioli, Eder; Teeple, D.; Manset, Nadine
2011-03-01
CFHT is responsible for processing raw ESPaDOnS images, removing instrument-related artifacts, and delivering science-ready data to the PIs. Here we describe the Upena pipeline, the software used to reduce the echelle spectro-polarimetric data obtained with the ESPaDOnS instrument. Upena is an automated pipeline that performs calibration and reduction of raw images. It can perform both real-time reduction on an image-by-image basis and a complete reduction after the observing night. Upena produces polarization and intensity spectra in FITS format. The pipeline is designed to perform parallel computing for improved speed, which ensures that the final products are delivered to the PIs before noon HST after each night of observations. We also present the OPERA project, an open-source pipeline to reduce ESPaDOnS data that will be developed as a collaborative effort between CFHT and the scientific community. OPERA will match the core capabilities of Upena and in addition will be open-source, flexible, and extensible.
Development and implementation of an EPID-based method for localizing isocenter.
Hyer, Daniel E; Mart, Christopher J; Nixon, Earl
2012-11-08
The aim of this study was to develop a phantom and analysis software that could be used to quickly and accurately determine the location of radiation isocenter to an accuracy of less than 1 mm using the EPID (Electronic Portal Imaging Device). The proposed solution uses a collimator setting of 10 × 10 cm2 to acquire EPID images of a new phantom constructed from LEGO blocks. Images from a number of gantry and collimator angles are analyzed by automated analysis software to determine the position of the jaws and the center of the phantom in each image. The distance between a chosen jaw and the phantom center is then compared to the same distance measured after a 180° collimator rotation to determine whether the phantom is centered in the dimension being investigated. Repeated tests show that the system is reproducible, independent of the imaging session, and that calculated offsets of the phantom from radiation isocenter are a function of phantom setup only. The accuracy of the algorithm's calculated offsets was verified by imaging the LEGO phantom before and after applying the calculated offset. These measurements show that the offsets are predicted with an accuracy of approximately 0.3 mm, which is on the order of the detector's pixel pitch. Comparison with a star-shot analysis yielded agreement of isocenter location within 0.5 mm. Additionally, the phantom and software are completely independent of linac vendor, and this study presents results from two linac manufacturers. A Varian Optical Guidance Platform (OGP) calibration array was also integrated into the phantom to allow calibration of the OGP while the phantom is positioned at radiation isocenter, reducing setup uncertainty in the calibration. This solution offers a quick, objective method to perform isocenter localization as well as laser alignment and OGP calibration on a monthly basis.
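The rotation trick at the heart of the method is that a 180° collimator rotation flips the sign of any phantom offset relative to the jaw, so the offset falls out as half the difference of the two measured distances. A sketch with invented distances:

    # Distances (mm) from a chosen jaw edge to the phantom centre, as
    # measured on EPID images before and after a 180 deg collimator
    # rotation (values invented for illustration).
    d_0 = 52.31     # collimator at 0 deg
    d_180 = 51.69   # collimator at 180 deg

    # At isocenter the two distances would be equal; an offset s along
    # this axis adds +s to one measurement and -s to the other, so:
    offset = (d_0 - d_180) / 2.0
    print(f"phantom offset along this axis: {offset:+.2f} mm")  # +0.31 mm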
van Schaick, Willem; van Dooren, Bart T H; Mulder, Paul G H; Völker-Dieben, Hennie J M
2005-07-01
To report on the calibration of the Topcon SP-2000P specular microscope and the Endothelial Cell Analysis Module of the IMAGEnet 2000 software, and to establish the validity of the different endothelial cell density (ECD) assessment methods available in these instruments. Using an external microgrid, we calibrated the magnification of the SP-2000P and the IMAGEnet software. In both eyes of 36 volunteers, we validated 4 ECD assessment methods by comparing these methods to the gold standard manual ECD, manual counting of cells on a video print. These methods were: the estimated ECD, estimation of ECD with a reference grid on the camera screen; the SP-2000P ECD, pointing out whole contiguous cells on the camera screen; the uncorrected IMAGEnet ECD, using automatically drawn cell borders, and the corrected IMAGEnet ECD, with manual correction of incorrectly drawn cell borders in the automated analysis. Validity of each method was evaluated by calculating both the mean difference with the manual ECD and the limits of agreement as described by Bland and Altman. Preset factory values of magnification were incorrect, resulting in errors in ECD of up to 9%. All assessments except 1 of the estimated ECDs differed significantly from manual ECDs, with most differences being similar (≤6.5%), except for uncorrected IMAGEnet ECD (30.2%). Corrected IMAGEnet ECD showed the narrowest limits of agreement (-4.9 to +19.3%). We advise checking the calibration of magnification in any specular microscope or endothelial analysis software as it may be erroneous. Corrected IMAGEnet ECD is the most valid of the investigated methods in the Topcon SP-2000P/IMAGEnet 2000 combination.
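The Bland-Altman comparison used for each method reduces to the mean difference (bias) plus limits of agreement at ±1.96 SD of the differences, here on percentage differences relative to the manual count. A sketch with invented data:

    import numpy as np

    rng = np.random.default_rng(3)
    manual = rng.normal(2700, 250, 72)               # gold-standard ECD (cells/mm^2)
    method = manual * 1.07 + rng.normal(0, 90, 72)   # some automated ECD

    # Bland-Altman on percentage differences relative to the manual ECD.
    pct_diff = 100.0 * (method - manual) / manual
    bias = pct_diff.mean()
    sd = pct_diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)       # limits of agreement
    print(f"bias = {bias:+.1f}%, limits of agreement = "
          f"{loa[0]:+.1f}% to {loa[1]:+.1f}%")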
Calibration Test Set for a Phase-Comparison Digital Tracker
NASA Technical Reports Server (NTRS)
Boas, Amy; Li, Samuel; McMaster, Robert
2007-01-01
An apparatus that generates four signals at a frequency of 7.1 GHz having precisely controlled relative phases and equal amplitudes has been designed and built. This apparatus is intended mainly for use in computer-controlled automated calibration and testing of a phase-comparison digital tracker (PCDT) that measures the relative phases of replicas of the same X-band signal received by four antenna elements in an array. (The relative direction of incidence of the signal on the array is then computed from the relative phases.) The present apparatus can also be used to generate precisely phased signals for steering a beam transmitted from a phased antenna array. The apparatus (see figure) includes a 7.1-GHz signal generator, the output of which is fed to a four-way splitter. Each of the four splitter outputs is attenuated by 10 dB and fed as input to a vector modulator, wherein DC bias voltages are used to control the in-phase (I) and quadrature (Q) signal components. The bias voltages are generated by digital-to-analog-converter circuits on a control board that receives its digital control input from a computer running a LabVIEW program. The outputs of the vector modulators are further attenuated by 10 dB, then presented at high-grade radio-frequency connectors. The attenuation reduces the effects of changing mismatch and reflections. The apparatus was calibrated in a process in which the bias voltages were first stepped through all possible IQ settings. Then in a reverse interpolation performed by use of MATLAB software, a lookup table containing 3,600 IQ settings, representing equal amplitude and phase increments of 0.1°, was created for each vector modulator. During operation of the apparatus, these lookup tables are used in calibrating the PCDT.
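The reverse-interpolation step can be illustrated as follows: sweep the bias codes, record the complex output of each modulator, then for each of the 3,600 target phases (0.1° steps at fixed amplitude) pick the IQ setting whose measured output lies closest. A simplified sketch with a simulated sweep standing in for lab data (the original used MATLAB; this conveys the same idea in Python):

    import numpy as np

    # Forward calibration sweep: complex modulator output at each (I, Q)
    # bias code (a small gain imbalance simulates real measurements).
    codes = np.linspace(-1.0, 1.0, 101)
    I, Q = np.meshgrid(codes, codes, indexing="ij")
    measured = (1.02 * I + 1j * 0.98 * Q).ravel()

    # Reverse interpolation: for each 0.1-degree phase step at a constant
    # target amplitude, pick the nearest measured IQ setting.
    targets = 0.5 * np.exp(1j * np.deg2rad(np.arange(0.0, 360.0, 0.1)))
    idx = np.array([np.abs(measured - t).argmin() for t in targets])
    lut = np.column_stack([I.ravel()[idx], Q.ravel()[idx]])  # 3600 x 2 table
    print(lut.shape, lut[:2])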
Interpreting observational studies: why empirical calibration is needed to correct p-values
Schuemie, Martijn J; Ryan, Patrick B; DuMouchel, William; Suchard, Marc A; Madigan, David
2014-01-01
Often the literature makes assertions of medical product effects on the basis of ‘p < 0.05’. The underlying premise is that at this threshold, there is only a 5% probability that the observed effect would be seen by chance when in reality there is no effect. In observational studies, much more than in randomized trials, bias and confounding may undermine this premise. To test this premise, we selected three exemplar drug safety studies from the literature, representing a case–control, a cohort, and a self-controlled case series design. We attempted to replicate these studies as best we could for the drugs studied in the original articles. Next, we applied the same three designs to sets of negative controls: drugs that are not believed to cause the outcome of interest. We observed how often p < 0.05 when the null hypothesis is true, and we fitted distributions to the effect estimates. Using these distributions, we computed calibrated p-values that reflect the probability of observing the effect estimate under the null hypothesis, taking both random and systematic error into account. An automated analysis of the scientific literature was performed to evaluate the potential impact of such a calibration. Our experiment provides evidence that the majority of observational studies would declare statistical significance when no effect is present. Empirical calibration was found to reduce spurious results to the desired 5% level. Applying these adjustments to the literature suggests that at least 54% of findings with p < 0.05 are not actually statistically significant and should be reevaluated. PMID:23900808
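The core idea can be sketched in a few lines: fit an empirical null distribution to the negative-control estimates (here a Gaussian in log relative risk) and evaluate new estimates against it rather than against the theoretical null. This is a minimal sketch of the idea; the published method additionally models per-estimate sampling error:

    import numpy as np
    from scipy import stats

    # Log effect estimates from negative controls (drug-outcome pairs with
    # no believed causal effect); values are invented for illustration.
    rng = np.random.default_rng(4)
    null_log_rr = rng.normal(loc=0.15, scale=0.25, size=40)

    # Fit the empirical null: systematic error has mean mu and spread sigma.
    mu, sigma = null_log_rr.mean(), null_log_rr.std(ddof=1)

    def calibrated_p(log_rr):
        """Two-sided p-value against the empirical (not theoretical) null."""
        z = (log_rr - mu) / sigma
        return 2.0 * stats.norm.sf(abs(z))

    # A nominally "significant" relative risk of 1.8 may no longer be
    # significant once systematic error is accounted for.
    print(f"calibrated p = {calibrated_p(np.log(1.8)):.3f}")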
Evaluation of digital radiography practice using exposure index tracking
Zhou, Yifang; Allahverdian, Janet; Nute, Jessica L.; Lee, Christina
2016-01-01
Some digital radiography (DR) detectors and software allow for remote download of exam statistics, including image reject status, body part, projection, and exposure index (EI). The ability to collect data automatically from multiple DR units is conducive to a quality control (QC) program monitoring institutional radiographic exposures. We have implemented such a QC program with the goal of identifying outliers in machine radiation output and opportunities for improvement in radiation dose levels. We studied the QC records of four digital detectors in greater detail on a monthly basis for one year. Although individual patient entrance skin exposure varied, the radiation dose levels to the detectors were kept consistent via phototimer recalibration. The exposure data stored on each digital detector were periodically downloaded in spreadsheet format for analysis. The EI median and standard deviation were calculated for each protocol (by body part), and EI histograms were created for torso protocols. When histograms of EI values for different units were compared, we observed differences of up to 400 in average EI (representing a 60% difference in radiation levels to the detector) between units nominally calibrated to the same EI. We identified distinct components of the EI distributions which, in some cases, had mean EI values 300 apart. Peaks were observed at the current calibrated EI, a previously calibrated EI, and an EI representing computed radiography (CR) techniques. Our findings in this ongoing project have allowed us to make useful interventions, from emphasizing the use of phototimers instead of institutional memory of manual techniques to improvements in our phototimer calibration. We believe that this QC program can be implemented at other sites and can reveal problems with radiation levels in the aggregate that are difficult to identify on a case-by-case basis. PACS number(s): 87.59.bf PMID:27929507
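The monthly analysis reduces to grouping the downloaded exposure records by unit and protocol and comparing the EI distributions; large gaps between units nominally calibrated to the same EI flag a phototimer problem. A sketch with a hypothetical record layout (not the vendors' export format):

    import numpy as np
    from collections import defaultdict

    # Hypothetical rows downloaded from the detectors: (unit, body part, EI).
    records = [
        ("DR1", "CHEST", 1395), ("DR1", "CHEST", 1410), ("DR1", "ABDOMEN", 1730),
        ("DR2", "CHEST", 1810), ("DR2", "CHEST", 1790), ("DR2", "ABDOMEN", 1745),
    ]

    groups = defaultdict(list)
    for unit, part, ei in records:
        groups[(unit, part)].append(ei)

    # Median and SD of EI per (unit, protocol).
    for key, eis in sorted(groups.items()):
        eis = np.asarray(eis, dtype=float)
        sd = eis.std(ddof=1) if eis.size > 1 else 0.0
        print(key, f"median EI = {np.median(eis):.0f}, SD = {sd:.0f}")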
Fisk, Mark D.; Pasyanos, Michael E.
2016-05-03
Characterizing regional seismic signals continues to be a difficult problem due to their variability. Calibration of these signals is very important to many aspects of monitoring underground nuclear explosions, including detecting seismic signals, discriminating explosions from earthquakes, and reliably estimating magnitude and yield. Amplitude tomography, which simultaneously inverts for source, propagation, and site effects, is a leading method of calibrating these signals. A major issue in amplitude tomography is the data quality of the input amplitude measurements. Pre-event and pre-phase signal-to-noise ratio (SNR) tests are typically used but can frequently include bad signals and exclude good signals. The deficiencies of SNR criteria, which are demonstrated here, lead to large calibration errors. To ameliorate these issues, we introduce a semi-automated approach to assess the bandwidth over which a spectrum behaves physically. We determine the maximum frequency (denoted Fmax) at which it deviates from this behavior due to inflections where noise or spurious signals start to bias the spectra away from the expected decay. We compare two amplitude tomography runs using the SNR and new Fmax criteria and show significant improvements to the stability and accuracy of the tomography output for frequency bands higher than 2 Hz when using our assessments of valid S-wave bandwidth. We compare Q estimates, P/S residuals, and some detailed results to explain the improvements. Lastly, for frequency bands higher than 4 Hz, needed for effective P/S discrimination of explosions from earthquakes, the new bandwidth criteria sufficiently fix the instabilities and errors so that the residuals and calibration terms are useful for application.
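A bandwidth assessment of this flavor can be sketched as: fit the expected spectral decay over a trusted band, then walk outward in frequency and set Fmax at the first point where the observed spectrum flattens away from the fitted decay (noise starting to dominate). The following is a simplified stand-in with an invented power-law spectrum, not the authors' algorithm:

    import numpy as np

    def find_fmax(freq, spec, fit_band=(1.0, 4.0), tol_db=3.0):
        """First frequency where the spectrum departs from its fitted decay."""
        logf = np.log10(freq)
        spec_db = 10.0 * np.log10(spec)
        sel = (freq >= fit_band[0]) & (freq <= fit_band[1])
        slope, intercept = np.polyfit(logf[sel], spec_db[sel], 1)
        predicted = slope * logf + intercept
        above = (freq > fit_band[1]) & (spec_db > predicted + tol_db)
        return freq[above][0] if above.any() else freq[-1]

    freq = np.linspace(0.5, 20.0, 400)
    spec = freq ** -2.0 + 1e-2       # physical decay plus flat noise floor
    print(f"Fmax ~ {find_fmax(freq, spec):.1f} Hz")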
Model and Interoperability using Meta Data Annotations
NASA Astrophysics Data System (ADS)
David, O.
2011-12-01
Software frameworks and architectures are in need of meta data to efficiently support model integration. Modelers have to know the context of a model, often stepping into modeling semantics and auxiliary information usually not provided in a concise structure and universal format consumable by a range of (modeling) tools. XML often seems the obvious solution for capturing meta data, but its wide adoption to facilitate model interoperability is limited by XML schema fragmentation, complexity, and verbosity outside of a data-automation process. Ontologies seem to overcome those shortcomings; however, the practical significance of their use remains to be demonstrated. OMS version 3 took a different approach to meta data representation. The fundamental building block of a modular model in OMS is a software component representing a single physical process, calibration method, or data access approach. Here, programming language features known as Annotations or Attributes were adopted. Within other (non-modeling) frameworks it has been observed that annotations lead to cleaner and leaner application code. Framework-supported model integration, traditionally accomplished using Application Programming Interface (API) calls, is now achieved using descriptive code annotations. Fully annotated components for various hydrological and Ag-system models now provide information directly for (i) model assembly and building, (ii) data flow analysis for implicit multi-threading or visualization, (iii) automated and comprehensive model documentation of component dependencies and physical data properties, (iv) automated model and component testing, calibration, and optimization, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Such a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework but a strong reference to their originating code. Since models and modeling components are not directly bound to the framework by specific APIs and/or data types, they can more easily be reused both within the framework and outside of it. While providing all these capabilities, a significant reduction in the size of the model source code was achieved. To assess the benefit of annotations for a modeler, studies were conducted to evaluate the effectiveness of the annotation-based framework approach against other modeling frameworks and libraries; a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A typical hydrological model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. Experience to date has demonstrated the multi-purpose value of using annotations. Annotations are also a feasible and practical method to enable interoperability among models and modeling frameworks.
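OMS3's annotations are Java language features; the same idea can be conveyed in Python, where decorators and declarative field objects attach metadata that a framework can read for assembly, documentation, and testing without the component depending on framework APIs. The sketch below is a hypothetical analogue, not the actual OMS API; all names are invented:

    def component(cls):
        """Class decorator: collect declared fields into framework metadata."""
        cls._meta = {k: v for k, v in vars(cls).items() if isinstance(v, Field)}
        return cls

    class Field:
        def __init__(self, role, unit, doc=""):
            self.role, self.unit, self.doc = role, unit, doc

    @component
    class SnowMelt:
        # Declarative metadata, analogous to OMS3's @In/@Out annotations.
        temperature = Field("in", "degC", "daily mean air temperature")
        melt_factor = Field("param", "mm/degC/day")
        melt = Field("out", "mm/day")

        def __init__(self, melt_factor=3.0):
            self.temperature = 0.0
            self.melt_factor = melt_factor

        def execute(self):
            # Degree-day melt model; the framework reads _meta for wiring,
            # documentation generation, and test harness construction.
            self.melt = max(0.0, self.melt_factor * self.temperature)

    m = SnowMelt()
    m.temperature = 5.0
    m.execute()
    print(m.melt, {k: v.role for k, v in SnowMelt._meta.items()})

Note that the component itself imports nothing from a framework: the metadata is passive, which is what keeps the model code reusable outside the framework.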
NASA Astrophysics Data System (ADS)
Holden, Peter; Lanc, Peter; Ireland, Trevor R.; Harrison, T. Mark; Foster, John J.; Bruce, Zane
2009-09-01
The identification and retrieval of a large population of ancient zircons (>4 Ga; Hadean) is of utmost priority if models of the early evolution of Earth are to be rigorously tested. We have developed a rapid and accurate U-Pb zircon age determination protocol utilizing a fully automated multi-collector ion microprobe, the ANU SHRIMP II, to screen and date these zircons. Unattended data acquisition relies on the calibration of a digitized sample map to the Sensitive High Resolution Ion MicroProbe (SHRIMP) sample-stage co-ordinate system. High-precision positioning of individual grains can be achieved through optical image processing of a specified mount location. The focal position of the mount can be optimized through a correlation between secondary-ion steering and the spot position on the target. For the Hadean zircon project, sample mounts are photographed and sample locations (normally grain centers) are determined off-line. The sample is loaded, reference points are calibrated, and the target positions are then visited sequentially. In SHRIMP II multiple-collector mode, zircons are initially screened (ca. 5 s data acquisition) through their 204Pb-corrected 207Pb/206Pb ratio; suitable candidates are then analyzed with a longer routine to obtain better measurement statistics, U/Pb, and concentration data. In SHRIMP I and SHRIMP RG, we have adapted the automated analysis protocol to single-collector measurements. These routines have been used to analyze over 100,000 zircons from the Jack Hills quartzite. Of these, ca. 7% have an age greater than 3.8 Ga, the oldest grain being 4372 ± 6 Ma (2σ); this age is part of a group of analyses around 4350 Ma which we interpret as the time when continental crust first began to coalesce in this region. In multi-collector mode, the analytical time for a single mount with 400 zircons is approximately 6 h, whereas in single-collector mode it is ca. 17 h. With this productivity, we can produce significant numbers of zircons for statistically limited studies, including correlations between age and morphology, mineral-inclusion paragenesis, and isotopic studies including Hf and O isotopic compositions, Pu-Xe, and Sm-Nd isotopes.
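The map-to-stage calibration amounts to fitting an affine transform from digitized mount coordinates to stage coordinates using a few reference points, after which every grain centre can be visited unattended. A sketch with invented coordinates:

    import numpy as np

    # Reference points: pixel coordinates on the digitized mount photo and
    # the stage coordinates measured for the same features (illustrative).
    map_xy = np.array([[120.0, 88.0], [1830.0, 95.0], [950.0, 1460.0]])
    stage_xy = np.array([[-8.92, 6.31], [8.77, 6.18], [0.31, -7.95]])

    # Affine fit: [x_s, y_s] = [x_m, y_m, 1] @ A. Three points determine
    # it exactly; more points would be fit in the least-squares sense.
    M = np.hstack([map_xy, np.ones((3, 1))])
    A, *_ = np.linalg.lstsq(M, stage_xy, rcond=None)   # shape (3, 2)

    def to_stage(x_pix, y_pix):
        """Convert a grain centre on the photo to stage coordinates."""
        return np.array([x_pix, y_pix, 1.0]) @ A

    print(to_stage(975.0, 730.0))   # drive the stage here, then analyze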
MPL-Net data products available at co-located AERONET sites and field experiment locations
NASA Astrophysics Data System (ADS)
Welton, E. J.; Campbell, J. R.; Berkoff, T. A.
2002-05-01
Micro-pulse lidar (MPL) systems are small, eye-safe lidars capable of profiling the vertical distribution of aerosol and cloud layers. There are now over 20 MPL systems around the world, and they have been used in numerous field experiments. A new project, MPL-Net, was started at NASA Goddard Space Flight Center in 2000 as a coordinated network of long-term MPL sites. The network also supports a limited number of field experiments each year. Most MPL-Net sites and field locations are co-located with AERONET sunphotometers. At these locations, the AERONET and MPL-Net data are combined to provide both column and vertically resolved aerosol and cloud measurements. The MPL-Net project coordinates maintenance and repair for all instruments in the network. In addition, data are archived and processed by the project using common, standardized algorithms that have been developed and refined over the past 10 years. These procedures ensure that stable, calibrated MPL systems are operating at the sites and that data quality remains high. Rigorous uncertainty calculations are performed on all MPL-Net data products. Automated, real-time level 1.0 data processing algorithms have been developed and are operational; they process the raw MPL data into range-corrected, uncalibrated lidar signals. Automated, real-time level 1.5 algorithms are also operational; they calibrate the MPL systems, determine cloud and aerosol layer heights, and calculate the optical depth and extinction profile of the aerosol boundary layer. The co-located AERONET sunphotometer provides the aerosol optical depth, which is used as a constraint to solve for the extinction-to-backscatter ratio and the aerosol extinction profile. Browse images and data files are available on the MPL-Net website. An overview of the processing algorithms and initial results from selected sites and field experiments will be presented, as will the capability of the MPL-Net project to produce automated real-time (next day) profiles of aerosol extinction. Finally, early results from the level 2.0 and level 3.0 algorithms currently under development will be presented; the level 3.0 data provide continuous (day/night) retrievals of multiple aerosol and cloud heights and the optical properties of each layer detected.
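The AOD constraint can be illustrated in its simplest form: assume a height-independent extinction-to-backscatter (lidar) ratio S and choose it so that the integrated extinction matches the sunphotometer optical depth. The toy sketch below scales a given backscatter profile directly; the operational level 1.5 code iterates this constraint through the full lidar inversion, so this is an idealization:

    import numpy as np

    # Toy constrained retrieval: given an aerosol backscatter profile
    # beta(z) (km^-1 sr^-1) from the lidar inversion, pick the lidar
    # ratio S so column extinction matches the co-located AERONET AOD.
    z = np.linspace(0.0, 4.0, 81)                 # altitude, km
    dz = z[1] - z[0]
    beta = 2.5e-3 * np.exp(-z / 1.5)              # illustrative profile
    aod_aeronet = 0.21                            # sunphotometer constraint

    S = aod_aeronet / (beta.sum() * dz)           # sr
    alpha = S * beta                              # extinction, km^-1
    print(f"lidar ratio S = {S:.1f} sr, "
          f"retrieved AOD = {alpha.sum() * dz:.2f}")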
Global Rapid Flood Mapping System with Spaceborne SAR Data
NASA Astrophysics Data System (ADS)
Yun, S. H.; Owen, S. E.; Hua, H.; Agram, P. S.; Fattahi, H.; Liang, C.; Manipon, G.; Fielding, E. J.; Rosen, P. A.; Webb, F.; Simons, M.
2017-12-01
As part of the Advanced Rapid Imaging and Analysis (ARIA) project for Natural Hazards, at NASA's Jet Propulsion Laboratory and California Institute of Technology, we have developed an automated system that produces derived products for flood extent map generation using spaceborne SAR data. The system takes as user input area-of-interest polygons and a time window for the SAR data search (pre- and post-event). The system then automatically searches for and downloads SAR data, processes it to produce coregistered SAR image pairs, and generates log amplitude ratio images from each pair. Currently the system is automated to support SAR data from the European Space Agency's Sentinel-1A/B satellites. We have used the system to produce flood extent maps from Sentinel-1 SAR data for the May 2017 Sri Lanka floods, which killed more than 200 people and displaced about 600,000. Our flood extent maps were delivered to the Red Cross to support response efforts. Earlier we also responded to the historic August 2016 Louisiana floods in the United States, which claimed 13 lives and caused over $10 billion in property damage. For this event, we made synchronized observations from space, air, and ground in close collaboration with USGS and NOAA. The USGS field crews acquired ground observation data, and NOAA acquired high-resolution airborne optical imagery within +/-2 hours of the SAR data acquisition by JAXA's ALOS-2 satellite. The USGS coordinates of flood water boundaries were used to calibrate our flood extent map derived from the ALOS-2 SAR data, and the map was delivered to FEMA for estimating the number of households affected. Based on the lessons learned from this response effort, we customized the ARIA system automation for rapid flood mapping and developed a mobile-friendly web app that can easily be used in the field for data collection. Rapid automatic generation of SAR-based global flood maps calibrated with independent observations from ground, air, and space will provide reliable snapshots of the extent of many flooding events. SAR missions with easy data access, such as Sentinel-1 and NASA's upcoming NISAR mission, combined with the ARIA system, will enable the formation of a library of flood extent maps, which can soon support the flood modeling community by providing observation-based constraints.
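The log amplitude ratio product rests on a simple physical fact: smooth open water is a specular reflector, so newly flooded land shows a strong drop in backscatter between the pre- and post-event acquisitions. A minimal sketch of the ratio-and-threshold step, with an invented threshold and simulated amplitudes standing in for coregistered SAR data (operational ARIA products are additionally calibrated against ground observations):

    import numpy as np

    def flood_mask(pre_amp, post_amp, threshold_db=-3.0):
        """Candidate flood pixels from coregistered pre/post SAR amplitudes."""
        eps = 1e-6                                  # avoid log of zero
        ratio_db = 20.0 * np.log10((post_amp + eps) / (pre_amp + eps))
        return ratio_db < threshold_db              # True = likely flooded

    rng = np.random.default_rng(5)
    pre = rng.gamma(4.0, 0.25, (256, 256))          # stand-in SAR amplitudes
    post = pre.copy()
    post[100:150, 60:200] *= 0.2                    # simulated inundation
    print(flood_mask(pre, post).mean())             # fraction flagged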
Calibration Procedures on Oblique Camera Setups
NASA Astrophysics Data System (ADS)
Kemper, G.; Melykuti, B.; Yu, C.
2016-06-01
Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the desire to reduce control points forces the use of direct referencing devices, which in turn requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples from the calibration flight together with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors relative to the nadir one; all camera images enter the aerotriangulation (AT) process as individually pre-oriented data. This enables a better post-calibration, making it possible to detect variations in the individual camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir camera has 80 MPix and a 50 mm lens while the oblique ones capture 50 MPix images using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU, which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms; these had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed on the basis of a special calibration flight with 351 exposures of all 5 cameras and registered GPS/IMU data. This mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlap, but within the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a sufficient number for camera calibration. In a first step, with the help of the nadir camera and the GPS/IMU data, an initial orientation correction and radial correction were calculated. With this approach, the whole project was calculated and calibrated in one step. During the iteration process, the radial and tangential parameters were switched on individually for the camera heads, after which the camera constants and principal point positions were checked and finally calibrated. Besides that, the boresight calibration can be performed either on the basis of the nadir camera and its offsets, or independently for each camera without correlation to the others; this must in any case be performed on a complete mission to obtain stability between the single camera heads. Determining the lever arms from the nodal points to the IMU centre needs more caution than for a single camera, especially due to the strong tilt angles. With all these steps prepared, one obtains a highly accurate sensor that enables fully automated data extraction and rapid updates of existing data, making frequent monitoring of urban dynamics possible in a fully 3D environment.
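The radial and tangential parameters switched on during the adjustment correspond to the standard Brown distortion model; removing the distortion for a measured image point is a small fixed-point iteration. A minimal sketch with invented coefficients (not the calibrated values of this sensor):

    import numpy as np

    def undistort_points(xy, k1, k2, p1, p2):
        """Remove Brown radial (k1, k2) and tangential (p1, p2) distortion.

        xy are image coordinates normalized to the principal point and
        camera constant; a few fixed-point iterations suffice for small
        distortions.
        """
        xy = np.asarray(xy, dtype=float)
        xu = xy.copy()
        for _ in range(5):
            x, y = xu[:, 0], xu[:, 1]
            r2 = x**2 + y**2
            radial = 1.0 + k1 * r2 + k2 * r2**2
            dx = 2*p1*x*y + p2*(r2 + 2*x**2)
            dy = p1*(r2 + 2*y**2) + 2*p2*x*y
            xu = (xy - np.column_stack([dx, dy])) / radial[:, None]
        return xu

    pts = np.array([[0.30, -0.20], [-0.45, 0.10]])
    print(undistort_points(pts, k1=-0.12, k2=0.01, p1=5e-4, p2=-3e-4))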