Sample records for squares pacls methods

  1. Classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.

    2002-01-01

An improved classical least squares multivariate spectral analysis method that adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method. These improvements decrease or eliminate many of the restrictions of CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of PACLS is the ability to accurately predict unknown sample concentrations when new unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.
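
    The numerical core of the PACLS prediction step is easy to sketch: the pure-component spectra estimated by CLS during calibration are augmented, at prediction time only, with additional measured spectral shapes (e.g., a spectrometer drift spectrum), and the augmented least-squares fit then yields analyte estimates corrected for the unmodeled source. A minimal NumPy illustration follows; the matrices, dimensions, and variable names are invented for illustration and are not the patented implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_wavelengths, n_analytes = 100, 2

    # CLS calibration output: pure-component spectra (rows = analytes).
    K = rng.random((n_analytes, n_wavelengths))
    # Measured shape of an uncalibrated effect, e.g. instrument drift.
    drift = rng.random(n_wavelengths)

    # Unknown-sample spectrum: true analyte contributions + drift + noise.
    c_true = np.array([0.3, 0.7])
    s = c_true @ K + 0.5 * drift + 0.01 * rng.standard_normal(n_wavelengths)

    # PACLS: augment the spectral-shape matrix at prediction time only.
    K_aug = np.vstack([K, drift])
    c_hat, *_ = np.linalg.lstsq(K_aug.T, s, rcond=None)
    print(c_hat[:n_analytes])  # analyte estimates, corrected for drift
    ```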

  2. Addition of polyaluminiumchloride (PACl) to waste activated sludge to mitigate the negative effects of its sticky phase in dewatering-drying operations.

    PubMed

    Peeters, Bart; Dewil, Raf; Vernimmen, Luc; Van den Bogaert, Benno; Smets, Ilse Y

    2013-07-01

This paper presents a new application of polyaluminium chloride (PACl) as a conditioner for waste activated sludge prior to its dewatering and drying. It is demonstrated at lab scale, with a shear-test-based protocol, that a dose ranging from 50 to 150 g PACl/kg MLSS (mixed liquor suspended solids) mitigates the stickiness of partially dried sludge with a dry solids content between 25 and 60% DS (dry solids). For example, at a solids dryness of 46% DS, the shear stress required to make the pre-consolidated sludge slip over a steel surface is reduced by 35%. This salient feature of PACl is further supported by torque data from a full-scale decanter centrifuge used to dewater waste sludge. The maximal torque developed by the screw conveyor inside the decanter centrifuge is reduced by a substantial 20% when the sludge feed is conditioned with PACl. The beneficial effect of conditioning waste sludge with PACl is proposed to result from the bound water associated with the aluminium polymers in PACl solutions, which acts as a type of lubrication for the intrinsically sticky sludge solids during the course of drying. It can be anticipated that PACl addition to waste sludge will become a technically feasible and very effective method to avoid fouling problems in direct sludge dryers worldwide, and to reduce torque issues in indirect sludge dryers as well as in sludge decanter centrifuges. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Minimizing residual aluminum concentration in treated water by tailoring properties of polyaluminum coagulants.

    PubMed

    Kimura, Masaoki; Matsui, Yoshihiko; Kondo, Kenta; Ishikawa, Tairyo B; Matsushita, Taku; Shirasaki, Nobutaka

    2013-04-15

Aluminum coagulants are widely used in water treatment plants to remove turbidity and dissolved substances. However, because high aluminum concentrations in treated water are associated with increased turbidity and because aluminum exerts undeniable human health effects, its concentration should be controlled in water treatment plants, especially in plants that use aluminum coagulants. In this study, the effect of polyaluminum chloride (PACl) coagulant characteristics on dissolved residual aluminum concentrations after coagulation and filtration was investigated. The dissolved residual aluminum concentrations at a given coagulation pH differed among the PACls tested. Very-high-basicity PACl yielded low dissolved residual aluminum concentrations and higher natural organic matter (NOM) removal. The low residual aluminum concentrations were related to the low content of monomeric aluminum (Ala) in the PACl. The polymeric (Alb)/colloidal (Alc) ratio in PACl did not greatly influence residual aluminum concentration. The presence of sulfate in PACl contributed to lower residual aluminum concentration only when coagulation was performed at around pH 6.5 or lower. Over a wide pH range (6.5-8.5), residual aluminum concentrations <0.02 mg/L were attained by tailoring PACl properties (Ala percentage ≤0.5%, basicity ≥85%). The dissolved residual aluminum concentrations did not increase with increasing dosage of high-basicity PACl, but did increase with increasing dosage of normal-basicity PACl. We inferred that increasing the basicity of PACl afforded lower dissolved residual aluminum concentrations partly because the high-basicity PACls could have a small percentage of Ala, which tends to form soluble aluminum-NOM complexes with molecular weights of 100 kDa-0.45 μm. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Characteristics of BPA removal from water by PACl-Al13 in coagulation process.

    PubMed

    Xiaoying, Ma; Guangming, Zeng; Chang, Zhang; Zisong, Wang; Jian, Yu; Jianbing, Li; Guohe, Huang; Hongliang, Liu

    2009-09-15

This paper discussed the coagulation characteristics of BPA with polyaluminum chloride (PACl-Al(13)) as coagulant, examined the impact of coagulation pH, PACl-Al(13) dosage, TOC (total organic carbon) and turbidity on BPA removal, and analyzed the possible dominant mechanisms in the water coagulation process. Formation and performance of flocs during coagulation were monitored using a photometric dispersion analyzer (PDA). When the concentrations of humic acid and turbidity in the solution were low, the experimental results showed that BPA removal first increased and subsequently decreased with increasing PACl-Al(13) dosage. The optimal PACl-Al(13) dosage was found at BPA/PACl-Al(13) = 1:2.6 (M/M) under the experimental conditions. Results show that the maximum BPA removal efficiency occurred at pH 9.0, due to adsorption of BPA onto Al(13) aggregates rather than a charge-neutralization mechanism by polynuclear aluminium salts in the solution. Humic acid and kaolin in the solution had a significant effect on BPA removal with PACl-Al(13) in the coagulation: BPA removal was weakened at high humic acid concentrations, and the BPA removal rate first increased and subsequently decreased with increasing turbidity.

  5. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
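
    The ACLS idea can be pictured with a short sketch: fit an ordinary CLS model, extract dominant shapes from the spectral residual matrix, and add those shapes to the model so that prediction solves for analyte and residual amplitudes jointly. The sketch below uses an SVD of the residuals as one plausible residual-derived augmentation; this choice, and all matrices, are illustrative assumptions rather than the patented procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, n_wavelengths, n_analytes = 20, 80, 2

    C = rng.random((n_samples, n_analytes))            # known concentrations
    K_true = rng.random((n_analytes, n_wavelengths))
    unmodeled = rng.random(n_wavelengths)              # interferent shape
    A = C @ K_true + rng.random((n_samples, 1)) * unmodeled  # calib. spectra

    # 1) Ordinary CLS estimate of the pure-component spectra.
    K_cls, *_ = np.linalg.lstsq(C, A, rcond=None)

    # 2) Derive augmentation shapes from the spectral residuals.
    R = A - C @ K_cls
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    aug_shapes = Vt[:1]                                # dominant residual shape

    # 3) Augmented model: analyte spectra plus residual-derived shapes,
    #    used jointly at prediction time.
    K_acls = np.vstack([K_cls, aug_shapes])
    ```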

  6. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  7. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  8. Coagulation/flocculation process with polyaluminum chloride for the remediation of oil sands process-affected water: Performance and mechanism study.

    PubMed

    Wang, Chengjin; Alpatova, Alla; McPhedran, Kerry N; Gamal El-Din, Mohamed

    2015-09-01

This study investigated the application of polyaluminum chloride (PACl) for the treatment of oil sands process-affected water (OSPW). These coagulants are commonly used in water treatment, with the most effective species reported to be Al13. A PACl with 83.6% Al13 was synthesized by the slow base titration method and compared with a commercially available PACl in terms of aluminum species distribution, coagulation/flocculation (CF) performance, floc morphology, and contaminant removal. Both coagulants were effective in removing suspended solids, achieving over 96% turbidity removal at all applied coagulant doses (0.5-3.0 mM Al). Removal efficiencies varied among metals depending on their pKa values: metal cations with pKa values below the OSPW pH of 6.9-8.1 (dose dependent), namely Fe, Al, Ga, and Ti, were removed by more than 90%, while cations with higher pKa values (K, Na, Ca, Mg and Ni) had removals of less than 40%. Naphthenic acids were not removed, owing to their low molecular weights, negative charges, and hydrophilic character at the OSPW pH. At the highest applied coagulant dose of 3.0 mM Al, the synthetic PACl reduced the Vibrio fischeri inhibition effect to 43.3 ± 3.0%, from 49.5 ± 0.4% in raw OSPW. In contrast, no reduction of toxicity was found for OSPW treated with the commercial PACl. Based on water quality and floc analyses, the dominant CF mechanism for particle removal during OSPW treatment was considered to be enmeshment in the precipitates (i.e., sweep flocculation). Overall, CF using the synthesized PACl can be a valuable pretreatment process for OSPW, producing wastewater that is more easily treated by downstream processes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Harvesting Chlorella vulgaris by magnetic flocculation using Fe₃O₄ coating with polyaluminium chloride and polyacrylamide.

    PubMed

    Zhao, Yuan; Liang, Wenyan; Liu, Lijun; Li, Feizhen; Fan, Qianlong; Sun, Xiaoli

    2015-12-01

The harvesting of Chlorella vulgaris by magnetic flocculation was investigated, with natural magnetite used as the magnetic seed and polyaluminium chloride (PACl) and polyacrylamide (PAM) used as coating polymers on the Fe3O4 surface. The composite modes of PACl, PAM, and Fe3O4 and their effects on harvesting were studied. The results showed that adding the composite PACl/Fe3O4 first (at (0.625 mmol Al/L)/(10 g/L)), followed by the addition of PAM (at 3 mg/L), was the optimum dosing strategy. Following this strategy, 99% of cells could be harvested in less than 0.5 min, and negative impacts from pH and algal organic matter could be overcome. Compared with PACl alone, the ζ-potentials of PACl/Fe3O4 increased substantially, from -4.9-8.5 mV to 1.5-19.5 mV, over the pH range 2.1-12.3. Charge neutralization by PACl/Fe3O4 and sweeping by PAM play an important role in the magnetic harvesting of microalgal cells. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Coagulation of micro-polluted Pearl River water with IPF-PACls.

    PubMed

    Xu, Yi; Sun, Wei; Wang, Dong-sheng; Tang, Hong-xiao

    2004-01-01

Water samples collected from early March 2001 to the end of April 2002 at the branch of the Pearl River around Guangzhou City were analyzed for their micro-pollution characteristics. The coagulation behavior of polyaluminum chlorides (PACls) was then examined, focusing on the effects of primary water quality and speciation distribution. The results showed that PACls exhibit better coagulation efficiency than alum, in accordance with their different speciation. Turbidity removal by PACls is evidently better than by alum at low dosage, while in the neutral pH zone (about 6.5-7.5) turbidity removal by PACls decreases at higher dosage owing to restabilization of particles. Organic matter in the raw water exhibits a marked influence on coagulation. In the acidic zone, organic matter complexes with polymer species and promotes the formation of flocs. With an increase in pH, this complexation gradually decreases, and the removal of organics mainly depends on adsorption. The effect is evidently improved as the B value (basicity) rises.

  11. Comparison of coagulation pretreatment of produced water from natural gas well by polyaluminium chloride and polyferric sulphate coagulants.

    PubMed

    Zhai, Jun; Huang, Zejin; Rahaman, Md Hasibur; Li, Yue; Mei, Longyue; Ma, Hongpu; Hu, Xuebin; Xiao, Haiwen; Luo, Zhiyong; Wang, Kunping

    2017-05-01

This study aimed to optimise coagulation pretreatment of produced water (PW) collected from a natural gas field. Two coagulants, polyferric sulphate (PFS) and polyaluminium chloride (PACl), were applied separately for organics, suspended solids (SS), and colour removal. Treatment performance at different coagulant dosages, initial pH values, stirring patterns, and with the addition of cationic polyacrylamide (PAM) was investigated in jar tests. The optimal coagulation conditions were a dosage of PACl 25 g/L or PFS 20 g/L with PAM 30 mg/L, an initial pH of 11, and fast mixing for 1.5 min (for PACl) or 2 min (for PFS) at 250 rpm followed by slow mixing for 15 min at 50 rpm for both coagulants. PACl performed better than PFS in removing chemical oxygen demand (COD), total organic carbon (TOC), SS, and colour, achieving removal efficiencies of 90.1%, 89.4%, 99.0%, and 99.9%, respectively, under the optimal conditions, while the corresponding PFS efficiencies were 86.1%, 86.1%, 99.0%, and 98.2%. However, oil removal was higher with PFS coagulation than with PACl: 98.9% versus 95.3%. Biodegradability of the PW, the ratio of five-day biochemical oxygen demand (BOD5) to COD, increased after pretreatment from 0.08 to 0.32 for PFS and to 0.43 for PACl. Zeta potential (Z-potential) analysis at the optimum coagulant dosages of PACl and PFS suggests that charge neutralisation was the predominant mechanism during coagulation. Better efficiency was observed at higher pH. The addition of PAM and the stirring pattern had a minor influence on the removal performance of both coagulants. The results suggest that PACl or PFS can be applied for the pretreatment of PW to provide substantial removal of carbon, oil, and colour, a necessary first step before subsequent main treatment units such as chemical oxidation or biological treatment.

  12. Elimination of representative contaminant candidate list viruses, coxsackievirus, echovirus, hepatitis A virus, and norovirus, from water by coagulation processes.

    PubMed

    Shirasaki, N; Matsushita, T; Matsui, Y; Murai, K; Aochi, A

    2017-03-15

We examined the removal of representative contaminant candidate list (CCL) viruses (coxsackievirus [CV] B5, echovirus [EV] type 11, and hepatitis A virus [HAV] IB), recombinant norovirus virus-like particles (rNV-VLPs), and murine norovirus (MNV) type 1 by coagulation. Water samples were subjected to coagulation with polyaluminum chloride (PACl, basicity 1.5) followed by either settling or settling and filtration. Together with our previously published results, the removal ratio order, as evaluated by a plaque-forming-unit method or an enzyme-linked immunosorbent assay after settling, was HAV > EV = rNV-VLPs ≥ CV = poliovirus type 1 = MNV > adenovirus type 40 (range, 0.1-2.7-log10). Infectious HAV was likely inactivated by the PACl and was therefore removed to a greater extent than the other viruses. A nonsulfated high-basicity PACl (basicity 2.1) removed the CCL viruses more efficiently than did two other sulfated PACls (basicity 1.5 or 2.1), alum, or ferric chloride. We also examined the removal ratios of two bacteriophages. The removal ratios for MS2 tended to be larger than those of the CCL viruses, whereas those for φX174 were comparable with or smaller than those of the CCL viruses. Therefore, φX174 may be a useful conservative surrogate for CCL viruses during coagulation. Copyright © 2016 Elsevier B.V. All rights reserved.
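
    For reference, removal ratios reported in log10 units follow from the influent and effluent titers as LRV = log10(C_in/C_out), so the 2.7-log10 upper end of the range above corresponds to roughly 99.8% removal. A trivial sketch with invented titers:

    ```python
    import math

    def log_removal(c_in: float, c_out: float) -> float:
        """Log10 removal value from influent/effluent titers (e.g., PFU/mL)."""
        return math.log10(c_in / c_out)

    # Invented titers: 1e6 PFU/mL in the feed, 2e3 PFU/mL after settling.
    lrv = log_removal(1e6, 2e3)
    print(f"{lrv:.1f}-log10 removal = {1 - 10**-lrv:.2%} removed")  # 2.7 / 99.80%
    ```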

  13. Temporary placement of a paclitaxel- or rapamycin-eluting stent is effective to reduce stenting-induced inflammatory reaction and scarring in benign cardia stricture models.

    PubMed

    Wang, Lin; Zhu, Yue-Qi; Cheng, Ying-Sheng; Cui, Wen-Guo; Chen, Ni-Wei

    2014-12-01

To investigate whether temporary placement of a paclitaxel- or rapamycin-eluting stent is more effective than a bare stent in reducing stenting-induced inflammatory reaction and scarring in benign cardia stricture models. Eighty dog models of stricture were randomly divided into a control group (CG, n=20, no stent insertion), a bare stent group (BSG, n=20), a paclitaxel-eluting stent group (Pacl-ESG, n=20) and a rapamycin-eluting stent group (Rapa-ESG, n=20), with one-week stent retention. Lower-oesophageal-sphincter pressure (LOSP), 5-minute barium height (5-mBH) and cardia diameter were assessed before and immediately after the procedure, and regularly for 6 months. Five dogs in each group were euthanized for histological examination at each follow-up assessment. Stent insertion was well tolerated, with similar migration rates in the three stent groups. At 6 months, LOSP and 5-mBH had improved in Pacl-ESG and Rapa-ESG compared with BSG (p<0.05), with no difference between Pacl-ESG and Rapa-ESG (p>0.05). The cardia remained more patent in Pacl-ESG and Rapa-ESG than in BSG (p<0.05). Reduced peak inflammatory reactions and scarring occurred in Pacl-ESG and Rapa-ESG compared with BSG (p<0.05), with similar outcomes in Pacl-ESG and Rapa-ESG (p>0.05). Insertion of paclitaxel- or rapamycin-eluting stents led to better outcomes than bare stents in benign cardia stricture models.

  14. Pilot testing of dissolved air flotation (DAF) in a highly effective coagulation-flocculation integrated (FRD) system.

    PubMed

    Wang, Yili; Guo, Jinlong; Tang, Hongxiao

    2002-01-01

Factors affecting the pretreatment coagulation/flocculation units were studied using raw water of low temperature and low turbidity. Aluminum sulfate (AS) and selected polyaluminium chlorides (PACls) were all effective in the DAF process when used under favorable conditions in the coagulant addition, coagulation, flocculation and flotation units. Compared with the AS coagulant, PACls at lower dosage could give equally effective performance, even with a shorter coagulation/flocculation time or a lower recycle ratio, during the treatment of cold water. This is attributed to the more highly charged polymeric Al species and the less hydrophilic, more compact flocs of the PACl coagulant. Based on the results of the pilot experiments, the goal of the FRD system can be achieved by combining a DAF heterocoagulation reactor with PACl coagulant (F), an efficient flocculation reactor (R), and an economical auto-dosing system (D).

  15. Reduction by enhanced coagulation of dissolved organic nitrogen as a precursor of N-nitrosodimethylamine.

    PubMed

    Tongchang, Phanawan; Kumsuvan, Jindalak; Phatthalung, Warangkana Na; Suksaroj, Chaisri; Wongrueng, Aunnop; Musikavong, Charongpun

    2018-05-12

Raw water from the Banglen (BL) water treatment plant (WTP) and Bangkhen (BK) WTP in central Thailand and the Hatyai (HY) WTP in southern Thailand was investigated for dissolved organic nitrogen (DON) reduction. The DON (mg N/L) and the dissolved organic carbon (DOC)/DON ratio were 0.34 and 21, 0.24 and 18, and 1.12 and 3 for the raw waters from the BL, BK, and HY WTPs, respectively. Polyaluminum chloride (PACl) dosages of 150, 80, and 40 mg/L at pH 7 were the optimal coagulation conditions for the raw waters from the BL, BK, and HY WTPs, respectively, and could reduce DON by 50%, 42%, and 42%, respectively. PACl and powdered activated carbon (PAC, both in mg/L) at 150 and 20, 80 and 20, and 40 and 60 could reduce DON in the raw waters from the BL, BK, and HY WTPs by 71%, 67%, and 29%, respectively. DOC/DON values of water treated with PACl were similar to those of raw water; DOC/DON values of water treated with PACl and PAC were lower than those of raw water. N-nitrosodimethylamine (NDMA) formation potentials of raw water, water treated with PACl or with both PACl and PAC, and the organic fractions of the BL, BK, and HY WTPs were below the detection limits of 542 and 237 ng/L, respectively. Reductions in fluorescence intensities of tryptophan-like substances at peaks 240/350 and 280/350 (nm Ex/nm Em) were moderately (correlation coefficient, R = 0.85 and 0.86) and fairly (R = 0.59, 0.67, and 0.75) correlated with DON reduction.

  16. A comparison of the efficacy of organic and mixed-organic polymers with polyaluminium chloride in chemically assisted primary sedimentation (CAPS).

    PubMed

    De Feo, G; Galasso, M; Landi, R; Donnarumma, A; De Gisi, S

    2013-01-01

CAPS is the acronym for chemically assisted primary sedimentation, which consists of adding chemicals to raw urban wastewater to increase the efficacy of coagulation, flocculation and sedimentation. The principal benefits of CAPS are: upgrading of urban wastewater treatment plants; increased efficacy of primary sedimentation; and greater production of energy from the anaerobic digestion of primary sludge. Metal coagulants are usually used because they are both effective and cheap, but they can damage the biological processes of anaerobic digestion. Biodegradable compounds generally do not have these drawbacks, but they are comparatively more expensive. Both metal coagulants and biodegradable compounds thus have preferential and penalizing properties in terms of CAPS application, and the problem can be resolved by means of a multi-criteria analysis. For this purpose, a series of tests was performed to compare the efficacy of several organic and mixed-organic polymers with that of polyaluminium chloride (PACl) under specific conditions. The multi-criteria analysis was carried out by coupling the simple additive weighting method with the paired comparison technique as a tool to evaluate the criteria priorities. Five criteria with the following priorities were used: chemical oxygen demand (COD) removal > turbidity, SV60 > coagulant dose, and coagulant cost. PACl was the best alternative in 70% of the cases. The CAPS process using PACl made it possible to obtain an average COD removal of 68%, compared with 38% obtained on average with natural sedimentation and 61% obtained on average with the best alternatives to PACl (cationic polyacrylamide, natural cationic polymer, dicyandiamide resin).

  17. Investigation of enteric adenovirus and poliovirus removal by coagulation processes and suitability of bacteriophages MS2 and φX174 as surrogates for those viruses.

    PubMed

    Shirasaki, N; Matsushita, T; Matsui, Y; Marubayashi, T; Murai, K

    2016-09-01

    We evaluated the removal of enteric adenovirus (AdV) type 40 and poliovirus (PV) type 1 by coagulation, using water samples from 13 water sources for drinking water treatment plants in Japan. The behaviors of two widely accepted enteric virus surrogates, bacteriophages MS2 and φX174, were compared with the behaviors of AdV and PV. Coagulation with polyaluminum chloride (PACl, basicity 1.5) removed AdV and PV from virus-spiked source waters: the infectious AdV and PV removal ratios evaluated by means of a plaque-forming-unit method were 0.1-1.4-log10 and 0.5-2.4-log10, respectively. A nonsulfated high-basicity PACl (basicity 2.1) removed infectious AdV and PV more efficiently than did other commercially available PACls (basicity 1.5-2.1), alum, and ferric chloride. The MS2 removal ratios tended to be larger than those of AdV and PV, partly because of differences in the hydrophobicities of the virus particles and the sensitivity of the virus to the virucidal activity of PACl; the differences in removal ratios were not due to differences in the surface charges of the virus particles. MS2, which was more hydrophobic than the other viruses, was inactivated during coagulation with PACl. Therefore, MS2 does not appear to be an appropriate surrogate for AdV and PV during coagulation. In contrast, because φX174, like AdV and PV, was not inactivated during coagulation, and because the hydrophobicity of φX174 was similar to or somewhat lower than the hydrophobicities of AdV and PV, the φX174 removal ratios tended to be similar to or somewhat smaller than those of the enteric viruses. Therefore, φX174 is a potential conservative surrogate for AdV and PV during coagulation. In summary, the surface hydrophobicity of virus particles and the sensitivity of the virus to the virucidal activity of the coagulant are probably important determinants of the efficiency of virus removal during coagulation. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Membrane fouling control and enhanced removal of pharmaceuticals and personal care products by coagulation-MBR.

    PubMed

    Park, Junwon; Yamashita, Naoyuki; Tanaka, Hiroaki

    2018-04-01

We investigated the effects of adding two coagulants, polyaluminium chloride (PACl) and chitosan, to the membrane bioreactor (MBR) process on membrane fouling and the removal of pharmaceuticals and personal care products (PPCPs). Their addition at optimized dosages improved the permeability of the membrane by reducing the concentration of soluble microbial products in the mixed liquor, the content of inorganic elements, and irreversible fouling of the membrane surface. During long-term operation, the addition of PACl increased the removal efficiencies of tetracycline, mefenamic acid, atenolol, furosemide, ketoprofen, and diclofenac by 17-23%. A comparative evaluation using mass balance calculations between coagulation-MBR (with PACl addition) and control-MBR (without PACl addition) showed that enhanced biodegradability played a key role in improving the removal efficiencies of some PPCPs in coagulation-MBR. Coagulation-MBR also had higher oxygen uptake rates and specific nitrification rates of microorganisms. Overall, our findings suggest that combining MBR with coagulation reduced membrane fouling, lengthening the operating period of the membrane, and improved the removal of some PPCPs as a result of enhanced biodegradability. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Polyaluminium chloride as an alternative to alum for the direct filtration of drinking water.

    PubMed

    Zarchi, Idit; Friedler, Eran; Rebhun, Menahem

    2013-01-01

The efficiency of various polyaluminium chloride coagulants (PACls) was compared with that of aluminium sulfate (alum) in the coagulation-flocculation process preceding direct filtration in drinking water treatment. The comparative study consisted of two separate yet complementary series of experiments: the first series included short (5-7 h) and long (24 h) filter runs conducted at a pilot filtration plant equipped with large filter columns that simulated full-scale filters. Partially treated surface water from the Sea of Galilee, characterized by very low turbidity (~1 NTU), was used. In the second series of experiments, speciation of aluminium in situ was investigated using the ferron assay method. Results from the pilot-scale study indicate that most PACls were as efficient as or more efficient than alum for direct filtration of surface water, without requiring acid addition for pH adjustment and subsequent base addition to re-stabilize the water. Consequently, a cost analysis of the chemicals needed for the process showed that treatment with PACl would be significantly less costly than treatment with alum. The aluminium speciation experiments revealed that the performance of the coagulant is influenced more by the species present during the coagulation process than by those present in the original reagents.

  20. Factorial validity of the Personality Adjective Checklist in a Dutch-speaking sample.

    PubMed

    Van den Broeck, Joke; Bastiaansen, Leen; Rossi, Gina; Dierckx, Eva; Mikolajczak-Degrauwe, Kalina; Hofmans, Joeri

    2014-01-01

We examined the factorial structure of the Dutch version of the Personality Adjective Checklist (PACL-D) in a Belgian sample of 3,012 community-dwelling adults. Exploratory factor analyses revealed a 5-factor structure (Neurotic, Aggressive/Dominant, Introverted vs. Extraverted, Conscientious, and Cooperative) that showed considerable overlap with 3 of the Big Five factors (i.e., Neuroticism, Extraversion, and Conscientiousness). Moreover, the 5-factor structure closely resembled the structure found in the original American PACL and was equivalent across gender and age.

  1. Coagulation removal of humic acid-stabilized carbon nanotubes from water by PACl: influences of hydraulic condition and water chemistry.

    PubMed

    Ma, Si; Liu, Changli; Yang, Kun; Lin, Daohui

    2012-11-15

Discharged carbon nanotubes (CNTs) can adsorb the widely distributed humic acid (HA) in aquatic environments and thus be stabilized. HA-stabilized CNTs can find their way into, and challenge, potable water treatment systems. This study investigated the efficiency of coagulation and sedimentation in removing HA-stabilized multi-walled carbon nanotubes (MWCNTs) using polyaluminum chloride (PACl) as a coagulant, with a focus on the effects of hydraulic conditions and water chemistry. Stirring speeds in the mixing and reacting stages were varied to examine the effect of hydraulic conditions on the removal rate. The stirring speed in the reacting stage affected floc formation and thereby had a greater impact on the removal rate than the stirring speed in the mixing stage. Water chemistry factors such as pH and ionic strength had a significant effect on the stability of the MWCNT suspension and the removal efficiency. Low pH (4-7) was favorable for saving coagulant while maintaining high removal efficiency. High ionic strength facilitated the destabilization of the HA-stabilized MWCNTs and thereby lowered the PACl dosage required for coagulation. However, excessively high ionic strength (higher than the critical coagulation concentration) decreased the maximum removal rate, probably by inhibiting the ionic activity of PACl hydrolyzates in water. These results are expected to shed light on potential improvements to the coagulation removal of stabilized MWCNTs in water treatment systems. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Characterization of micro-flocs of NOM coagulated by PACl, alum and polysilicate-iron in terms of molecular weight and floc size.

    PubMed

    Fusheng, Li; Akira, Yuasa; Yuka, Ando

    2008-01-01

Micro-flocs of NOM coagulated by polyaluminium chloride (PACl), alum and polysilicate-iron (PSI) were characterized by floc size, HPSEC-based molecular weight and the captured content of coagulant-based aluminium and iron. Changes in floc composition with respect to the mass ratios of captured NOM to Al and Fe were examined. Lowering the water pH to optimum levels was found to be capable of removing small NOM constituents that are generally difficult to precipitate at neutral pH. For PACl and PSI, the distribution of micro-flocs (0.1-5.0 microm) reached a steady state after rapid mixing for 30 seconds, with NOM found in the non-coagulated fraction (d < 0.1 microm) and in the coagulated fraction with floc sizes above 5.0 microm (d > 5.0 microm). For alum, however, the presence of NOM inside intermediate floc fractions of d = 0.1-1.0 microm, 1.0-3.0 microm and 3.0-5.0 microm was confirmed.

  3. Removal of organic impurities in waste glycerol from biodiesel production process through the acidification and coagulation processes.

    PubMed

    Xie, Qiao-Guang; Taweepreda, Wirach; Musikavong, Charongpun; Suksaroj, Chaisri

    2012-01-01

Treatment of waste glycerol, a by-product of the biodiesel production process, can reduce water pollution and bring significant economic benefits for biodiesel facilities. In the present study, hydrochloric acid (HCl) was used for acidification to convert soaps into salts and free fatty acids, which were recovered after treatment. The pH value and the dosages of polyaluminum chloride (PACl) and polyacrylamide (PAM) were considered as the factors that can influence coagulation efficiency. The pH value of the waste glycerol was adjusted over a range of 3-9, and PACl and PAM were added in the ranges of 1-6 g/L and 0.005-0.07 g/L, respectively. The results showed that the best coagulation efficiency occurred at pH 4 with PACl and PAM dosages of 2 and 0.01 g/L, respectively. The removal of chemical oxygen demand (COD), biochemical oxygen demand (BOD(5)), total suspended solids (TSS) and soaps was 80, 68, 97 and 100%, respectively. The organic matter in the treated waste glycerol comprised glycerol (288 g/L), methanol (3.8 g/L), and other impurities (0.3 g/L).

  4. Inactivation of F-specific bacteriophages during flocculation with polyaluminum chloride - a mechanistic study.

    PubMed

    Kreißel, Katja; Bösl, Monika; Hügler, Michael; Lipp, Pia; Franzreb, Matthias; Hambsch, Beate

    2014-03-15

Bacteriophages are often used as surrogates for enteric viruses in spiking experiments to determine the virus removal efficiencies of water treatment measures such as flocculation or filtration steps. Such spiking experiments with bacteriophages are indispensable if the natural virus concentrations in the raw water of water treatment plants are too low to allow the determination of elimination levels over several orders of magnitude. In order to obtain reliable results from such spiking tests, it is essential that the bacteriophages behave comparably to viruses and remain stable during the experiments. To test this, the influence of flocculation parameters on the bacteriophages MS2, Qβ and phiX174 was examined. Notably, the F-specific phages MS2 and Qβ were found to be inactivated in flocculation processes with polyaluminum chloride (PACl). In contrast, other aluminum coagulants such as AlCl3 or Al2(SO4)3 did not show a comparable effect on MS2 in this study. In experiments testing the influence of different PACl species on MS2 and Qβ inactivation during flocculation, it could be shown that cationic dissolved PACl species (Al13) interacted with the MS2 surface and thereby reduced the surviving phage fraction to c/c0 values below 1*10(-4), even at very low PACl concentrations of 7 μmol Al/L. Other inactivation mechanisms, such as irreversible adsorption of phages to the floc structure or damage to phage surfaces due to entrapment in the floc during coagulation and floc formation, do not seem to contribute to the low surviving fraction found for both F-specific bacteriophages. Furthermore, no influence of phage agglomeration or of pH drops during the flocculation process on phage inactivation could be observed. The somatic coliphage phiX174, in contrast, did not show sensitivity to this chemical stress, and accordingly only slight interaction between Al13 and the phage surface was observed. Consequently, F-specific phages like MS2 should not be used as surrogates for viruses in flocculation experiments with PACl to determine virus removal rates, as the results are influenced by strong inactivation of the bacteriophages under the experimental conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Everolimus plus exemestane versus bevacizumab-based chemotherapy for second-line treatment of hormone receptor-positive metastatic breast cancer in Greece: An economic evaluation study.

    PubMed

    Kourlaba, Georgia; Rapti, Vasiliki; Alexopoulos, Athanasios; Relakis, John; Koumakis, Georgios; Chatzikou, Magdalini; Maniadakis, Nikos; Georgoulias, Vassilis

    2015-08-05

    The objective of our study was to conduct a cost-effectiveness (CE) study of combined everolimus (EVE) and exemestane (EXE) versus the common clinical practice in Greece for the treatment of postmenopausal women with HR+/HER2- advanced breast cancer (BC) progressing on nonsteroidal aromatase inhibitors (NSAI). The combinations of bevacizumab (BEV) plus paclitaxel (PACL) and BEV plus capecitabine (CAPE) were selected as comparators. A Markov model, consisting of three health states, was used to describe disease progression and evaluate the CE of the comparators from a third-party payer perspective over a lifetime horizon. Efficacy and safety data as well as utility values considered in the model were extracted from the relevant randomized Phase III clinical trials and other published studies. Direct medical costs referring to the year 2014 were incorporated in the model. A probabilistic sensitivity analysis was conducted to account for uncertainty and variation in the parameters of the model. Primary outcomes were patient survival (life-years), quality-adjusted life years (QALYs), total direct costs and incremental cost-effectiveness ratios (ICER). The discounted quality-adjusted survival of patients treated with EVE plus EXE was greater by 0.035 and 0.004 QALYs, compared to BEV plus PACL and BEV plus CAPE, respectively. EVE plus EXE was the least costly treatment in terms of drug acquisition, administration, and concomitant medications. The total lifetime cost per patient was estimated at €55,022, €67,980, and €62,822 for EVE plus EXE, BEV plus PACL, and BEV plus CAPE, respectively. The probabilistic analysis confirmed the deterministic results. Our results suggest that EVE plus EXE may be a dominant alternative relative to BEV plus PACL and BEV plus CAPE for the treatment of HR+/HER2- advanced BC patients failing initial therapy with NSAIs.
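
    The decision rule behind these comparisons is the incremental cost-effectiveness ratio, ICER = ΔCost/ΔQALY; a strategy that is both less costly and more effective, as EVE plus EXE is here, is said to dominate. A minimal sketch using the figures quoted above:

    ```python
    def icer(cost_new: float, cost_old: float, delta_qaly: float) -> float:
        """Incremental cost (EUR) per QALY gained by the new strategy."""
        if delta_qaly == 0:
            return float("inf")
        return (cost_new - cost_old) / delta_qaly

    # Lifetime costs and QALY gains from the abstract; a negative ratio with
    # a positive QALY gain means EVE+EXE dominates the comparator.
    print(icer(55022, 67980, 0.035))  # vs BEV + PACL
    print(icer(55022, 62822, 0.004))  # vs BEV + CAPE
    ```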

  6. Probing Coagulation Behavior of Individual Aluminum Species for Removing Corresponding Disinfection Byproduct Precursors: The Role of Specific Ultraviolet Absorbance

    PubMed Central

    Zhao, He; Hu, Chengzhi; Zhang, Di; Liu, Huijuan; Qu, Jiuhui

    2016-01-01

Coagulation behavior of aluminum chloride and polyaluminum chloride (PACl) for removing the corresponding disinfection byproduct (DBP) precursors is discussed in this paper. CHCl3, brominated trihalomethane (THM-Br), dichloroacetic acid (DCAA) and trichloroacetic acid (TCAA) formation potential yields were correlated with specific ultraviolet absorbance (SUVA) values in different molecular weight (MW) fractions of humic substances (HS). Correlation analyses and principal component analysis were performed to examine the relationships between SUVA and the different DBP precursors. To acquire more structural characteristics of the DBP precursors and aluminum speciation, freeze-dried precipitates were analyzed by Fourier transform infrared (FTIR) spectroscopy and C 1s and Al 2p X-ray photoelectron spectroscopy (XPS). The results indicated that TCAA precursors (no MW limits) and DCAA and CHCl3 precursors in low MW fractions (MW < 30 kDa) had relatively good correlations with SUVA values. These DBP precursors were coagulated more easily by in situ Al13 from AlCl3 at pH 5.0. Owing to their relatively low aromatic content and more aliphatic structures, THM-Br precursors (no MW limits) and CHCl3 precursors in high MW fractions (MW > 30 kDa) were preferentially removed by PACl coagulation with preformed Al13 species at pH 5.0. Additionally, for DCAA precursors in high MW fractions (MW > 30 kDa), with relatively low aromatic content and more carboxylic structures, the greatest removal occurred at pH 6.0 through PACl coagulation with aggregated Al13 species. PMID:26824243

  7. Slaughterhouse Wastewater Treatment by Combined Chemical Coagulation and Electrocoagulation Process

    PubMed Central

    Bazrafshan, Edris; Kord Mostafapour, Ferdos; Farzadkia, Mehdi; Ownagh, Kamal Aldin; Mahvi, Amir Hossein

    2012-01-01

Slaughterhouse wastewater contains large and varied amounts of organic matter (e.g., proteins, blood, fat and lard). In order to produce an effluent suitable for stream discharge, chemical coagulation and electrocoagulation techniques were explored at the laboratory pilot scale for the removal of organic compounds from slaughterhouse effluent. The purpose of this work was to investigate the feasibility of treating cattle-slaughterhouse wastewater by a combined chemical coagulation and electrocoagulation process to achieve the required standards. The influence of operating variables such as coagulant dose, electrical potential and reaction time on the removal efficiencies of major pollutants was determined. The rate of removal of pollutants increased linearly with increasing doses of PACl and applied voltage. COD and BOD5 removal of more than 99% was obtained by adding 100 mg/L PACl and applying a voltage of 40 V. The experiments demonstrated the effectiveness of chemical and electrochemical techniques for the treatment of slaughterhouse wastewaters. Consequently, the combined process is inferred to be superior to electrocoagulation alone for the removal of both organic and inorganic compounds from cattle-slaughterhouse wastewater. PMID:22768233

  8. Characterisation of landfill leachate by EEM-PARAFAC-SOM during physical-chemical treatment by coagulation-flocculation, activated carbon adsorption and ion exchange.

    PubMed

    Oloibiri, Violet; De Coninck, Sam; Chys, Michael; Demeestere, Kristof; Van Hulle, Stijn W H

    2017-11-01

The combination of fluorescence excitation-emission matrices (EEM), parallel factor analysis (PARAFAC) and self-organizing maps (SOM) is shown to be a powerful tool for following dissolved organic matter (DOM) removal from landfill leachate by physical-chemical treatment consisting of coagulation, granular activated carbon (GAC) adsorption and ion exchange. Using PARAFAC, three DOM components were identified: C1, representing humic/fulvic-like compounds; C2, representing tryptophan-like compounds; and C3, representing humic-like compounds. Coagulation with ferric chloride (FeCl3) at a dose of 7 g/L reduced the maximum fluorescence of C1, C2 and C3 by 52%, 17% and 15% respectively, while polyaluminium chloride (PACl) reduced only C1, by 7%, at the same dose. DOM removal during GAC and ion exchange treatment of raw and coagulated leachate exhibited different profiles. At less than 2 bed volumes (BV) of treatment, the humic components C1 and C3 were rapidly removed, whereas at BV ≥ 2 the tryptophan-like component C2 was preferentially removed. Overall, leachate treated with coagulation + 10.6 BV GAC + 10.6 BV ion exchange showed the highest removal of C1 (39% with FeCl3, 8% with PACl) and C2 (74% with FeCl3, 68% with PACl) and no C3 removal, whereas only 52% C2 removal, and no C1 or C3 removal, was observed in raw leachate treated with 10.6 BV GAC + 10.6 BV ion exchange alone. Analysis of the PARAFAC-derived components with SOM revealed that coagulation, GAC and ion exchange can treat leachate at least 50% longer than GAC and ion exchange alone before the fluorescence composition of the leachate remains unchanged. Copyright © 2017 Elsevier Ltd. All rights reserved.
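
    The EEM-PARAFAC step decomposes a three-way fluorescence array (samples x excitation x emission) into a few components such as C1-C3 above. A minimal sketch using the tensorly library is shown below; the array dimensions are placeholders, the rank of 3 mirrors the study, and preprocessing such as scatter removal is omitted, so this is an assumption-laden illustration rather than the authors' pipeline.

    ```python
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import non_negative_parafac

    # Placeholder EEM stack: 30 samples x 40 excitation x 60 emission channels.
    rng = np.random.default_rng(0)
    eem = tl.tensor(rng.random((30, 40, 60)))

    # Non-negativity is the usual constraint for fluorescence data.
    weights, (scores, ex_loadings, em_loadings) = non_negative_parafac(eem, rank=3)

    # scores[:, k] tracks component k across samples (e.g., raw vs. treated
    # leachate); ex_loadings/em_loadings give each component's spectral shape.
    print(scores.shape, ex_loadings.shape, em_loadings.shape)
    ```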

  9. [Influencing factors and mechanism of arsenic removal during the aluminum coagulation process].

    PubMed

    Chen, Gui-Xia; Hu, Cheng-Zhi; Zhu, Ling-Feng; Tong, Hua-Qing

    2013-04-01

Aluminum coagulants are widely used for arsenic (As) removal during the drinking water treatment process. Aluminium chloride (AlCl3) and polyaluminium chloride (PACl) with a high content of Al13 were used as coagulants. The effects of aluminum species, pH, humic acid (HA) and coexisting anions on arsenic removal were investigated. Results showed that AlCl3 and PACl were almost ineffective for As(III) removal, while the As(V) removal efficiency reached almost 100%. pH was an important factor influencing arsenic removal efficiency, because pH influences the distribution of aluminum species during the coagulation process. The arsenic removal efficiency of the aluminum coagulants was positively correlated with the content of Al13 species. HA and some coexisting anions had a negative impact on arsenic removal because of competitive adsorption; the negative influence of HA was more pronounced at low coagulant dosages. PO4(3-) and F(-) showed a marked influence on arsenic removal, but there was no obvious influence when SiO3(2-), CO3(2-) and SO4(2-) coexisted. The present study should be helpful in directing arsenic removal by enhanced coagulation during drinking water treatment.

  10. Conventional drinking water treatment and direct biofiltration for the removal of pharmaceuticals and artificial sweeteners: A pilot-scale approach.

    PubMed

    McKie, Michael J; Andrews, Susan A; Andrews, Robert C

    2016-02-15

The presence of endocrine disrupting compounds (EDCs), pharmaceutically active compounds (PhACs) and artificial sweeteners is of concern to water providers because these compounds may be incompletely removed by wastewater treatment processes and pose an unknown risk to consumers through long-term consumption at low concentrations. This study utilized pilot-scale conventional and biological drinking water treatment processes to assess the removal of nine PhACs and EDCs and two artificial sweeteners. Conventional treatment (coagulation, flocculation, settling, non-biological dual-media filtration) was compared with biofilters operated with or without the addition of in-line coagulant (0.2-0.8 mg Al(3+)/L; alum or PACl). A combination of biofiltration, with or without in-line alum, and conventional filtration was able to reduce 7 of the 9 PhACs and EDCs by more than 50% from river water, while artificial sweeteners were inconsistently removed by conventional treatment or biofiltration. Increasing the PACl dose from 0 to 0.8 mg/L increased average removals of PhACs and EDCs from 39 to 70% and of artificial sweeteners from ~15% to ~35% in lake water. These results suggest that a combination of biological, chemical and physical treatment can be applied to effectively reduce the concentrations of EDCs, PhACs, and artificial sweeteners. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Evaluation of the suitability of a plant virus, pepper mild mottle virus, as a surrogate of human enteric viruses for assessment of the efficacy of coagulation-rapid sand filtration to remove those viruses.

    PubMed

    Shirasaki, N; Matsushita, T; Matsui, Y; Yamashita, R

    2018-02-01

Here, we evaluated the removal of three representative human enteric viruses - adenovirus (AdV) type 40, coxsackievirus (CV) B5, and hepatitis A virus (HAV) IB - and one surrogate of human caliciviruses - murine norovirus (MNV) type 1 - by coagulation-rapid sand filtration, using water samples from eight water sources for drinking water treatment plants in Japan. The removal ratios of a plant virus (pepper mild mottle virus; PMMoV) and two bacteriophages (MS2 and φX174) were compared with the removal ratios of human enteric viruses to assess the suitability of PMMoV, MS2, and φX174 as surrogates for human enteric viruses. The removal ratios of AdV, CV, HAV, and MNV, evaluated via the real-time polymerase chain reaction (PCR) method, were 0.8-2.5-log10 when commercially available polyaluminum chloride (PACl, basicity 1.5) and virgin silica sand were used as the coagulant and filter medium, respectively. The type of coagulant affected the virus removal efficiency, but the age of silica sand used in the rapid sand filtration did not. Coagulation-rapid sand filtration with non-sulfated, high-basicity PACls (basicity 2.1 or 2.5) removed viruses more efficiently than the other aluminum-based coagulants. The removal ratios of MS2 were sometimes higher than those of the three human enteric viruses and MNV, whereas the removal ratios of φX174 tended to be smaller than those of the three human enteric viruses and MNV. In contrast, the removal ratios of PMMoV were similar to and strongly correlated with those of the three human enteric viruses and MNV. Thus, PMMoV appears to be a suitable surrogate for human enteric viruses for the assessment of the efficacy of coagulation-rapid sand filtration to remove viruses. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Experience in Using a Finite Element Stress and Vibration Package on a Minicomputer.

    DTIC Science & Technology

    1982-01-01

as the Graphics Oriented Interactive Finite Element Time Sharing Package (GIFTS). This package has been running on a PDP11/60 minicomputer... Unlike many other FEM packages, GIFTS consists of a collection of fully compatible special-purpose programs operating on a set of files on disk known... matrix is initiated by running the appropriate program from the GIFTS library. The following is a list of the major GIFTS library programs with a

  13. Interactions of specific extracellular organic matter and polyaluminum chloride and their roles in the algae-polluted water treatment.

    PubMed

    Tang, Xiaomin; Zheng, Huaili; Gao, Baoyu; Zhao, Chuanliang; Liu, Bingzhi; Chen, Wei; Guo, Jinsong

    2017-06-15

Extracellular organic matter (EOM) is ubiquitous in algae-polluted water and has a significant impact on human health and drinking water treatment. We investigated the different characteristics of dissolved extracellular organic matter (dEOM) and bound extracellular organic matter (bEOM) recovered from various growth periods of Microcystis aeruginosa, and their interactions with polyaluminum chloride (PACl). The roles of the different EOM fractions in algae-polluted water treatment are also discussed. The aromatic, OH, NH, CN and NO functional groups in bEOM, which interact more strongly with hydroxyl aluminum than those in dEOM, are responsible for bEOM and algae removal. Some low molecular weight (MW) organic components and protein-like substances in bEOM are most easily removed. In contrast, dEOM reacts weakly with PACl or inhibits coagulation, especially dEOM with high-MW organic components. The main coagulation mechanisms for bEOM are the generation of insoluble Al-bEOM through complexation, bridging by AlO4Al12(OH)24(H2O)12(7+) (Al13), adsorption onto Al(OH)3(am) and entrapment in flocs. Adsorption by Al13 and Al(OH)3(am) mainly contributes to dEOM removal. It is also recommended to treat the algae with dEOM and bEOM at the initial stage. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. An improved OpenSim gait model with multiple degrees of freedom knee joint and knee ligaments.

    PubMed

    Xu, Hang; Bloswick, Donald; Merryweather, Andrew

    2015-08-01

Musculoskeletal models are widely used to investigate joint kinematics and predict muscle force during gait. However, the knee is usually simplified as a one degree of freedom joint and knee ligaments are neglected. The aim of this study was to develop an OpenSim gait model with enhanced knee structures. The knee joint in this study included three rotations and three translations. The three knee rotations and mediolateral translation were independent, with proximodistal and anteroposterior translations occurring as a function of knee flexion/extension. Ten elastic elements described the geometrical and mechanical properties of the anterior and posterior cruciate ligaments (ACL and PCL) and the medial and lateral collateral ligaments (MCL and LCL). The three independent knee rotations were evaluated using OpenSim to observe ligament function. The results showed that the anterior and posterior bundles of the ACL and PCL (aACL, pACL and aPCL, pPCL) intersected during knee flexion. The aACL and pACL mainly provided force during knee flexion and adduction, respectively. The aPCL was slack throughout the range of the three knee rotations; however, the pPCL was utilised for knee abduction and internal rotation. The LCL was employed for knee adduction and rotation, but was slack beyond 20° of knee flexion. The MCL bundles were mainly used during knee adduction and external rotation. All these results suggest that the functions of the knee ligaments in this model approximated the behaviour of the physical knee and that the enhanced knee structures can improve the ability to investigate knee joint biomechanics during various gait activities.
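
    The elastic elements used for ligaments in such musculoskeletal models are typically tension-only springs with a quadratic 'toe' region followed by a linear region. The sketch below implements the commonly used Blankevoort-type force-strain law as an assumed stand-in, with illustrative parameter values, not the exact OpenSim implementation.

    ```python
    def ligament_force(strain: float, k: float = 1000.0, eps_l: float = 0.03) -> float:
        """Tension-only ligament force from strain (Blankevoort-type law).

        Quadratic toe region for 0 <= strain <= 2*eps_l, linear beyond, and
        zero in compression (slack). k (N per unit strain) and eps_l are
        illustrative placeholders, not values from the paper.
        """
        if strain <= 0.0:                  # slack: no compressive force
            return 0.0
        if strain <= 2.0 * eps_l:          # nonlinear toe region
            return k * strain * strain / (4.0 * eps_l)
        return k * (strain - eps_l)        # linear region

    # e.g., an aACL bundle stretched 8% beyond its slack length:
    print(ligament_force(0.08))            # 50.0 N with these placeholders
    ```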

  15. Deposition behavior of residual aluminum in drinking water distribution system: Effect of aluminum speciation.

    PubMed

    Zhang, Yue; Shi, Baoyou; Zhao, Yuanyuan; Yan, Mingquan; Lytle, Darren A; Wang, Dongsheng

    2016-04-01

    Finished drinking water usually contains some residual aluminum. The deposition of residual aluminum in distribution systems, and its potential release back into the drinking water, can significantly influence water quality at consumer taps. A preliminary analysis of the aluminum content of cast iron pipe corrosion scales and loose deposits demonstrated that aluminum deposition on distribution pipe surfaces can be excessive for water treated with aluminum coagulants, including polyaluminum chloride (PACl). In this work, the deposition features of different aluminum species in PACl were investigated by a simulated coil-pipe test, a batch reactor test, and quartz crystal microbalance with dissipation monitoring. Non-polymeric aluminum species deposited the least, and their deposition layer was soft and hydrated, indicating the possible formation of amorphous Al(OH)3. Al13 had the highest deposition tendency, and its deposition layer was rigid and much less hydrated, indicating that the deposited aluminum might possess a regular structure, with self-aggregation of Al13 as the main deposition mechanism. Al30 deposited more slowly and in smaller aluminum amounts than Al13; however, the total deposited mass of Al30 was much higher, which was attributed to the deposition of particulate aluminum species with a much higher hydration state. Compared with stationary conditions, stirring significantly enhanced the deposition process, whereas the effect of pH on deposition was relatively weak in the near-neutral range of 6.7 to 8.7. Copyright © 2015. Published by Elsevier B.V.

  17. Activation of the pacidamycin PacL adenylation domain by MbtH-like proteins.

    PubMed

    Zhang, Wenjun; Heemstra, John R; Walsh, Christopher T; Imker, Heidi J

    2010-11-23

    Nonribosomal peptide synthetase (NRPS) assembly lines are major avenues for the biosynthesis of a vast array of peptidyl natural products. Several hundred bacterial NRPS gene clusters contain a small (∼70-residue) protein belonging to the MbtH family for which no function has been defined. Here we show that two strictly conserved Trp residues in MbtH-like proteins contribute to stimulation of amino acid adenylation in some NRPS modules. We also demonstrate that adenylation can be stimulated not only by cognate MbtH-like proteins but also by homologues from disparate natural product pathways.

  18. Assessment of the efficacy of membrane filtration processes to remove human enteric viruses and the suitability of bacteriophages and a plant virus as surrogates for those viruses.

    PubMed

    Shirasaki, N; Matsushita, T; Matsui, Y; Murai, K

    2017-05-15

    Here, we evaluated the efficacy of direct microfiltration (MF) and ultrafiltration (UF) to remove three representative human enteric viruses (adenovirus [AdV] type 40, coxsackievirus [CV] B5, and hepatitis A virus [HAV] IB) and one surrogate of human caliciviruses (murine norovirus [MNV] type 1). Eight different MF membranes and three different UF membranes were used. We also examined the ability of coagulation pretreatment with high-basicity polyaluminum chloride (PACl) to enhance virus removal by MF. The removal ratios of two bacteriophages (MS2 and φX174) and a plant virus (pepper mild mottle virus; PMMoV) were compared with those of the human enteric viruses to assess the suitability of these viruses as surrogates for human enteric viruses. The virus removal ratios obtained with direct MF through membranes with nominal pore sizes of 0.1-0.22 μm differed depending on the membrane used; adsorptive interactions, particularly hydrophobic interactions between virus particles and the membrane surface, were dominant factors in virus removal. In contrast, direct UF with membranes with nominal molecular weight cutoffs of 1-100 kDa effectively removed viruses through size exclusion, and >4-log10 removal was achieved with a membrane with a nominal molecular weight cutoff of 1 kDa. At pH 7 and 8, in-line coagulation-MF with nonsulfated high-basicity PACls containing Al30 species generally achieved better virus removal (>4-log10) than the other aluminum-based coagulants, except for φX174. For all of the filtration processes, the removal ratios of AdV, CV, HAV, and MNV were comparable and strongly correlated with each other. The removal ratios of MS2 and PMMoV were comparable to or smaller than those of the three human enteric viruses and MNV, and were strongly correlated with them. The removal ratios obtained with coagulation-MF for φX174 were markedly smaller than those obtained for the three human enteric viruses and MNV. However, because MS2 was inactivated by contact with PACl during coagulation pretreatment, unlike AdV, CV, MNV, and PMMoV, the removal ratios of infectious MS2 probably overestimated the ability of coagulation-MF to remove infectious AdV, CV, and caliciviruses. Thus, PMMoV appears to be a suitable surrogate for human enteric viruses for assessing the efficacy of membrane filtration processes to remove viruses, whereas MS2 and φX174 do not. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Identification of dissolved organic matter in raw water supply from reservoirs and canals as precursors to trihalomethanes formation.

    PubMed

    Musikavong, Charongpun; Wattanachira, Suraphong

    2013-01-01

    The characteristics and quantity of dissolved organic matter (DOM) as trihalomethane precursors in water from the U-Tapao Basin, Songkhla, Thailand were investigated. The sources of water in the basin consist of two reservoirs and the U-Tapao canal. The canal receives discharge from the reservoirs as well as treated and untreated wastewater from agricultural processes, communities, and industries. Water downstream of the canal is used as a raw water supply. Water samples were collected from the two reservoirs, from upstream and midstream of the canal, and from the raw water supply in the rainy season and in summer. The DOM level in the canal water was higher than that of the reservoir water. The highest trihalomethane formation potential (THMFP) was formed in the raw water supply. Fourier-transform infrared peaks of humic acid were detected in the reservoir and canal waters. Aliphatic hydrocarbons and organic nitrogen were the major chemical classes in the reservoir and canal water as characterized by pyrolysis gas chromatography-mass spectrometry. The optimal condition for poly aluminum chloride (PACl) coagulation was a dosage of 40 mg/L at pH 7, which reduced average UV-254 by 57%, DOC by 64%, and THMFP by 42%. In the coagulated water, peaks of O-H groups or H-bonded NH, C=O of cyclic and acyclic compounds, ketones and quinones, aromatic C=C, C-O of alcohols, ethers, and carbohydrates, deformation of COOH, and carboxylic acid salts were detected. Aliphatic hydrocarbons, organic nitrogen, and aldehydes and ketones were the major chemical classes. This DOM can be considered the prominent DOM for water supply plants that use PACl as a coagulant.

  20. Drinking water treatment using a submerged internal-circulation membrane coagulation reactor coupled with permanganate oxidation.

    PubMed

    Zhang, Zhongguo; Liu, Dan; Qian, Yu; Wu, Yue; He, Peiran; Liang, Shuang; Fu, Xiaozheng; Li, Jiding; Ye, Changqing

    2017-06-01

    A submerged internal-circulation membrane coagulation reactor (MCR) was used to treat surface water to produce drinking water. Polyaluminum chloride (PACl) was used as the coagulant, and a hydrophilic polyvinylidene fluoride (PVDF) submerged hollow-fiber microfiltration membrane was employed. The influences of trans-membrane pressure (TMP), the zeta potential (ZP) of the suspended particles in the raw water, and KMnO4 dosing on water flux and on the removal of turbidity and organic matter were systematically investigated. Continuous bench-scale experiments showed that the permeate quality of the MCR satisfied the requirements for a centralized water supply according to the Standards for Drinking Water Quality of China (GB 5749-2006), as evaluated by turbidity (<1 NTU) and total organic carbon (TOC) (<5 mg/L) measurements. Besides water flux, the removal of turbidity, TOC, and dissolved organic carbon (DOC) from the raw water also increased with increasing TMP in the range of 0.01-0.05 MPa. A high ZP induced by PACl, such as 5-9 mV, increased the number of fine and total particles in the MCR, and consequently caused serious membrane fouling and high permeate turbidity. However, the removal of TOC and DOC increased with increasing ZP. A slightly positive ZP, such as 1-2 mV, corresponding to charge-neutralization coagulation, was favorable for membrane fouling control. Moreover, dosing with KMnO4 further improved the removal of turbidity and DOC, thereby mitigating membrane fouling. The results are helpful for the application of the MCR to drinking water production and are also beneficial to the research and application of other hybrid coagulation-membrane separation processes. Copyright © 2016. Published by Elsevier B.V.

  1. Approximate thermochemical tables for some C-H and C-H-O species

    NASA Technical Reports Server (NTRS)

    Bahn, G. S.

    1973-01-01

    Approximate thermochemical tables are presented for some C-H and C-H-O species and for some ionized species, supplementing the JANAF Thermochemical Tables for application to finite-chemical-kinetics calculations. The approximate tables were prepared by interpolation and extrapolation of the limited available data, especially by interpolations over chemical families of species. Original estimations have been smoothed by use of a modification, for the CDC-6600 computer, of the Lewis Research Center PACl program originally prepared for the IBM-7094 computer. Summary graphs for various families show reasonably consistent curve-fit values, anchored by properties of existing species in the JANAF tables.

  2. Parameter estimation of Monod model by the Least-Squares method for microalgae Botryococcus Braunii sp

    NASA Astrophysics Data System (ADS)

    See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.

    2018-04-01

    This research aims to estimate the parameters of the Monod model for growth of the microalga Botryococcus braunii sp. by the least-squares method. The Monod equation is a non-linear equation that can be transformed into linear form and solved by least-squares linear regression. Meanwhile, the Gauss-Newton method is an alternative for solving the non-linear least-squares problem, obtaining the parameter values of the Monod model by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for the microalga Botryococcus braunii sp. can be estimated by the least-squares method. However, the parameter estimates obtained by the non-linear least-squares method are more accurate than those of the linear least-squares method, since the SSE of the non-linear method is smaller.
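
    To make the two approaches concrete, the sketch below fits the Monod model μ = μ_max·S/(Ks + S) to invented substrate/growth-rate data, once via the linearized (Lineweaver-Burk-style) least-squares form and once via non-linear least squares with scipy's Gauss-Newton-type solver; the data values are illustrative assumptions, not taken from the paper.

    ```python
    # Hedged sketch: Monod parameter estimation by linearized vs nonlinear least squares.
    # The substrate (S) and growth-rate (mu) data below are made up for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    def monod(S, mu_max, Ks):
        """Monod growth model: mu = mu_max * S / (Ks + S)."""
        return mu_max * S / (Ks + S)

    S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])        # substrate conc. (arbitrary units)
    mu = np.array([0.18, 0.28, 0.40, 0.52, 0.58, 0.62])  # observed growth rates

    # 1) Linearized least squares (Lineweaver-Burk form):
    #    1/mu = (Ks/mu_max)*(1/S) + 1/mu_max
    slope, intercept = np.polyfit(1.0 / S, 1.0 / mu, 1)
    mu_max_lin, Ks_lin = 1.0 / intercept, slope / intercept

    # 2) Nonlinear least squares (Gauss-Newton-type) on the original equation
    (mu_max_nl, Ks_nl), _ = curve_fit(monod, S, mu, p0=[mu_max_lin, Ks_lin])

    for label, mm, ks in [("linearized", mu_max_lin, Ks_lin), ("nonlinear", mu_max_nl, Ks_nl)]:
        sse = np.sum((mu - monod(S, mm, ks)) ** 2)  # sum of squared errors
        print(f"{label:10s}: mu_max={mm:.3f}, Ks={ks:.3f}, SSE={sse:.5f}")
    ```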

  3. Temperature effects on flocculation, using different coagulants.

    PubMed

    Fitzpatrick, C S B; Fradin, E; Gregory, J

    2004-01-01

    Temperature is known to affect flocculation and filter performance. Jar tests were conducted in the laboratory, using a photometric dispersion analyser (PDA), to assess the effects of temperature on floc formation, breakage, and reformation. Alum, ferric sulphate, and three polyaluminium chloride (PACl) coagulants were investigated at temperatures ranging between 6 and 29 degrees C for a suspension of kaolin clay in London tap water. Results confirm that floc formation is slower at lower temperatures for all coagulants. A commercial PACl product, PAX XL 19, produces the largest flocs at all temperatures, and alum the smallest. Increasing the shear rate results in floc breakage in all cases, and the flocs never reform to their original size. This effect is most notable at temperatures around 15 degrees C. Breakage, in terms of floc size reduction, is greater at higher temperatures, suggesting a weaker floc. Recovery after increased shear is greater at lower temperatures, implying that floc break-up is more reversible at lower temperatures.

  4. Optimal least-squares finite element method for elliptic problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Povinelli, Louis A.

    1991-01-01

    An optimal least-squares finite element method is proposed for two- and three-dimensional elliptic problems, and its advantages over the mixed Galerkin method and the usual least-squares finite element method are discussed. In the usual least-squares finite element method, the second-order equation -∇·(∇u) + u = f is recast as a first-order system: -∇·p + u = f, ∇u - p = 0. The error analysis and numerical experiments show that, in this usual least-squares finite element method, the rate of convergence for the flux p is one order lower than optimal. In order to obtain an optimal least-squares method, the irrotationality condition ∇×p = 0 should be included in the first-order system.

  5. Creating Magic Squares.

    ERIC Educational Resources Information Center

    Lyon, Betty Clayton

    1990-01-01

    One method of making magic squares using a prolongated square is illustrated. Discussed are third-order magic squares, fractional magic squares, fifth-order magic squares, decimal magic squares, and even magic squares. (CW)
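
    As a concrete companion to such classroom constructions, the sketch below generates odd-order magic squares with the classical Siamese (De la Loubère) method; this is a standard construction, not necessarily the prolongated-square method described in the article.

    ```python
    # Hedged sketch: the classical Siamese method for odd-order magic squares
    # (a standard construction; not necessarily the article's prolongated-square method).
    import numpy as np

    def siamese_magic_square(n: int) -> np.ndarray:
        """Build an n x n magic square for odd n by the Siamese method."""
        if n % 2 == 0:
            raise ValueError("Siamese method requires odd n")
        square = np.zeros((n, n), dtype=int)
        row, col = 0, n // 2          # start in the middle of the top row
        for k in range(1, n * n + 1):
            square[row, col] = k
            r, c = (row - 1) % n, (col + 1) % n   # move up and to the right
            if square[r, c]:                      # occupied: drop down one row instead
                r, c = (row + 1) % n, col
            row, col = r, c
        return square

    m = siamese_magic_square(5)
    print(m)
    print("column sums:", m.sum(axis=0))  # every row/column sums to n*(n^2+1)/2
    ```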

  6. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2005-01-01

    Response surface construction methods using Moving Least Squares (MLS), Kriging, and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adopted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
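
    For flavor, here is a minimal one-dimensional Gaussian-RBF response surface with an analytic derivative of the interpolant; the test function and shape parameter are illustrative assumptions, not the paper's examples.

    ```python
    # Hedged sketch: Gaussian RBF response surface with an analytic derivative.
    # Test function and shape parameter are illustrative assumptions.
    import numpy as np

    def rbf_fit(x, y, s):
        """Solve for RBF weights w in K w = y, K_ij = exp(-(x_i-x_j)^2/(2 s^2))."""
        K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * s**2))
        return np.linalg.solve(K, y)

    def rbf_eval(xq, x, w, s):
        K = np.exp(-((xq[:, None] - x[None, :]) ** 2) / (2 * s**2))
        return K @ w

    def rbf_deriv(xq, x, w, s):
        """d/dx of the interpolant: sum_j w_j * (-(x-x_j)/s^2) * phi_j(x)."""
        d = xq[:, None] - x[None, :]
        return (np.exp(-d**2 / (2 * s**2)) * (-d / s**2)) @ w

    x = np.linspace(0.0, 1.0, 11)
    y = np.sin(2 * np.pi * x)                  # sampled response
    w = rbf_fit(x, y, s=0.15)
    xq = np.array([0.25, 0.5, 0.75])
    print("surface :", rbf_eval(xq, x, w, s=0.15))
    print("deriv   :", rbf_deriv(xq, x, w, s=0.15))
    print("exact d :", 2 * np.pi * np.cos(2 * np.pi * xq))
    ```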

  7. Emotion regulation mediates the relationship between personality and sleep quality.

    PubMed

    Vantieghem, Iris; Marcoen, Nele; Mairesse, Olivier; Vandekerckhove, Marie

    2016-09-01

    Despite a long history of interest in personality as well as in the mechanisms that regulate sleep, the relationship between personality and sleep is not yet well understood. The purpose of this study was to explore how personality affects sleep. The present cross-sectional study, based on a sample of 1291 participants with a mean age of 31.16 years (SD = 12.77), investigates the impact of personality styles, assessed by the Personality Adjectives Checklist (PACL), on subjective sleep quality, as well as the possible mediation of this relationship by dispositional emotion regulation (ER) styles. The dispositional use of suppression was a quite consistent predictor of poor subjective sleep quality for individuals scoring high on Confident, Cooperative or Introversive personality traits, but low on Respectful personality traits. Although a positive relationship between reappraisal and subjective sleep quality was found, there was only little evidence for a relationship between the assessed personality styles and the use of cognitive reappraisal. The present results indicate that in the evaluation of subjective sleep, the impact of personality and ER processes, such as emotion suppression, should be taken into account.

  8. Optimization of coagulation-flocculation treatment on paper-recycling wastewater: Application of response surface methodology.

    PubMed

    Birjandi, Noushin; Younesi, Habibollah; Bahramifar, Nader; Ghafari, Shahin; Zinatizadeh, Ali Akbar; Sethupathi, Sumathi

    2013-01-01

    The coagulation-flocculation (CF) process was applied to treat paper-recycling wastewater in jar-test experiments. The study examined the efficiency of alum and poly aluminum chloride (PACl), in combination with a cationic polyacrylamide (C-PAM), in removing chemical oxygen demand (COD) and turbidity from paper-recycling wastewater. Optimization of the CF process was performed by varying the independent parameters (coagulant dosage, flocculant dosage, initial COD, and pH) using a central composite design (CCD) under response surface methodology (RSM). The maximum-removal setting required a pH of 4.5, a coagulant dosage of 40 mg/L, and a flocculant dosage of 4.5 mg/L, which gave 92% turbidity reduction, 97% COD removal, and an SVI of 80 mL/g. The best coagulant and flocculant were alum and Chemfloc 3876 at doses of 41 and 7.52 mg/L, respectively, at pH 6.85. These conditions gave 91.30% COD removal, 95.82% turbidity removal, and an SVI of 12 mL/g.
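
    To illustrate the RSM step, the sketch below fits a second-order response surface to hypothetical jar-test results with ordinary least squares and locates its stationary point; the data, factor names, and ranges are invented for illustration, not the paper's design.

    ```python
    # Hedged sketch: second-order response surface fit (RSM-style) by least squares.
    # Jar-test data below (pH, coagulant dose -> COD removal %) are invented.
    import numpy as np

    # factors: x1 = pH, x2 = coagulant dose (mg/L); response: y = COD removal (%)
    X = np.array([[5.0, 20], [5.0, 60], [9.0, 20], [9.0, 60],
                  [4.2, 40], [9.8, 40], [7.0, 12], [7.0, 68], [7.0, 40]])
    y = np.array([78.0, 84.0, 74.0, 70.0, 80.0, 66.0, 72.0, 79.0, 88.0])

    x1, x2 = X[:, 0], X[:, 1]
    # design matrix for y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)

    # stationary point: grad = 0, i.e. [[2*b11, b12], [b12, 2*b22]] @ x = -[b1, b2]
    H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
    opt = np.linalg.solve(H, -b[1:3])
    print("coefficients:", np.round(b, 4))
    print("stationary point (pH, dose):", np.round(opt, 2))
    ```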

  9. A high order compact least-squares reconstructed discontinuous Galerkin method for the steady-state compressible flows on hybrid grids

    NASA Astrophysics Data System (ADS)

    Cheng, Jian; Zhang, Fan; Liu, Tiegang

    2018-06-01

    In this paper, a class of new high-order reconstructed DG (rDG) methods based on the compact least-squares (CLS) reconstruction [23,24] is developed for simulating two-dimensional steady-state compressible flows on hybrid grids. The proposed method combines the advantages of the DG discretization with the flexibility of the compact least-squares reconstruction, and exhibits superior potential in enhancing accuracy and reducing computational cost compared to the underlying DG methods with the same number of degrees of freedom. Specifically, a third-order compact least-squares rDG(p1p2) method and a fourth-order compact least-squares rDG(p2p3) method are developed and investigated in this work. In these compact least-squares rDG methods, the low-order degrees of freedom are evolved through the underlying DG(p1) and DG(p2) methods, respectively, while the high-order degrees of freedom are reconstructed through the compact least-squares reconstruction, in which the constitutive relations are built by requiring the reconstructed polynomial and its spatial derivatives on the target cell to conserve the cell averages and the corresponding spatial derivatives on the face-neighboring cells. The large sparse linear system resulting from the compact least-squares reconstruction can be solved relatively efficiently when it is coupled with the temporal discretization in steady-state simulations. A number of test cases are presented to assess the performance of the high-order compact least-squares rDG methods, demonstrating their potential as an alternative approach for high-order numerical simulations of steady-state compressible flows.

  10. Least Squares Procedures.

    ERIC Educational Resources Information Center

    Hester, Yvette

    Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least…
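
    As a minimal concrete instance of these procedures, the snippet below computes a regression line ("line of best fit") by ordinary least squares on a few invented points.

    ```python
    # Hedged sketch: the "line of best fit" by ordinary least squares.
    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])   # roughly y = 2x, invented data

    slope, intercept = np.polyfit(x, y, 1)     # minimizes the sum of squared residuals
    print(f"y ≈ {slope:.3f} x + {intercept:.3f}")
    ```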

  11. A Note on Magic Squares

    ERIC Educational Resources Information Center

    Williams, Horace E.

    1974-01-01

    A method for generating 3x3 magic squares is developed. A series of questions relating to these magic squares is posed. An investigation using matrix methods is suggested, with some questions for consideration. (LS)

  12. 40 CFR Appendix C to Subpart Nnn... - Method for the Determination of Product Density

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... insulation. The method is applicable to all cured board and blanket products. 2. Equipment One square foot (12 in. by 12 in.) template, or templates that are multiples of one square foot, for use in cutting... procedure for the designated product. 3.2 Cut samples using one square foot (or multiples of one square foot...

  13. 40 CFR Appendix C to Subpart Nnn... - Method for the Determination of Product Density

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... The method is applicable to all cured board and blanket products. 2. Equipment One square foot (12 in. by 12 in.) template, or templates that are multiples of one square foot, for use in cutting... procedure for the designated product. 3.2 Cut samples using one square foot (or multiples of one square foot...

  14. 40 CFR Appendix C to Subpart Nnn... - Method for the Determination of Product Density

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... The method is applicable to all cured board and blanket products. 2. Equipment One square foot (12 in. by 12 in.) template, or templates that are multiples of one square foot, for use in cutting... procedure for the designated product. 3.2 Cut samples using one square foot (or multiples of one square foot...

  15. The covariance matrix for the solution vector of an equality-constrained least-squares problem

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1976-01-01

    Methods are given for computing the covariance matrix for the solution vector of an equality-constrained least squares problem. The methods are matched to the solution algorithms given in the book, 'Solving Least Squares Problems.'

  16. Adaptive Modal Identification for Flutter Suppression Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Drew, Michael; Swei, Sean S.

    2016-01-01

    In this paper, we will develop an adaptive modal identification method for identifying the frequencies and damping of a flutter mode based on model-reference adaptive control (MRAC) and least-squares methods. The least-squares parameter estimation will achieve parameter convergence in the presence of persistent excitation whereas the MRAC parameter estimation does not guarantee parameter convergence. Two adaptive flutter suppression control approaches are developed: one based on MRAC and the other based on the least-squares method. The MRAC flutter suppression control is designed as an integral part of the parameter estimation where the feedback signal is used to estimate the modal information. On the other hand, the separation principle of control and estimation is applied to the least-squares method. The least-squares modal identification is used to perform parameter estimation.
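
    A minimal recursive least-squares (RLS) identifier of the kind such schemes build on is sketched below for a discrete second-order (single-mode) model; the AR(2) model structure, forgetting factor, and signals are illustrative assumptions, not the paper's controller.

    ```python
    # Hedged sketch: recursive least squares identifying a second-order AR model
    # y[k] = a1*y[k-1] + a2*y[k-2] + noise, from which modal frequency/damping follow.
    import numpy as np

    rng = np.random.default_rng(0)
    wn, zeta, dt = 2 * np.pi * 5.0, 0.02, 0.001        # 5 Hz mode, 2% damping (invented)
    # discrete AR(2) coefficients of a damped oscillator sampled at dt
    r, th = np.exp(-zeta * wn * dt), wn * np.sqrt(1 - zeta**2) * dt
    a1, a2 = 2 * r * np.cos(th), -r**2

    y = np.zeros(4000)
    y[:2] = [0.0, 1e-3]
    for k in range(2, len(y)):
        y[k] = a1 * y[k-1] + a2 * y[k-2] + 1e-5 * rng.standard_normal()

    theta = np.zeros(2)                                 # RLS estimates of [a1, a2]
    P = 1e6 * np.eye(2)                                 # large initial covariance
    lam = 0.999                                         # forgetting factor
    for k in range(2, len(y)):
        phi = np.array([y[k-1], y[k-2]])
        K = P @ phi / (lam + phi @ P @ phi)             # gain
        theta += K * (y[k] - phi @ theta)               # innovation update
        P = (P - np.outer(K, phi @ P)) / lam

    # recover continuous-time frequency and damping from the identified poles
    poles = np.roots([1.0, -theta[0], -theta[1]])
    s = np.log(poles[0]) / dt
    print(f"identified freq {abs(s)/(2*np.pi):.2f} Hz, damping {-s.real/abs(s):.4f}")
    ```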

  17. Least squares regression methods for clustered ROC data with discrete covariates.

    PubMed

    Tang, Liansheng Larry; Zhang, Wei; Li, Qizhai; Ye, Xuan; Chan, Leighton

    2016-07-01

    The receiver operating characteristic (ROC) curve is a popular tool to evaluate and compare the accuracy of diagnostic tests in distinguishing the diseased group from the nondiseased group when test results are continuous or ordinal. A complicated data setting occurs when multiple tests are measured on abnormal and normal locations from the same subject and the measurements are clustered within the subject. Although least squares regression methods can be used to estimate the ROC curve from correlated data, how to develop least squares methods to estimate the ROC curve from clustered data has not been studied. Also, the statistical properties of least squares methods under the clustering setting are unknown. In this article, we develop least squares ROC methods that allow the baseline and link functions to differ and, more importantly, accommodate clustered data with discrete covariates. The methods can generate smooth ROC curves that satisfy the inherent continuity of the true underlying curve. The least squares methods are shown to be more efficient than existing nonparametric ROC methods under appropriate model assumptions in simulation studies. We apply the methods to a real example in the detection of glaucomatous deterioration. We also derive the asymptotic properties of the proposed methods. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  19. Least-squares collocation meshless approach for radiative heat transfer in absorbing and scattering media

    NASA Astrophysics Data System (ADS)

    Liu, L. H.; Tan, J. Y.

    2007-02-01

    A least-squares collocation meshless method is employed for solving radiative heat transfer in absorbing, emitting, and scattering media. The least-squares collocation meshless method for radiative transfer is based on the discrete-ordinates equation. A moving least-squares approximation is applied to construct the trial functions. In addition to the collocation points used to construct the trial functions, a number of auxiliary points are adopted to form the total residual of the problem. The least-squares technique obtains the solution by minimizing the sum of the residuals at all collocation and auxiliary points. Three numerical examples are studied to illustrate the performance of this new solution method. The numerical results are compared with other benchmark approximate solutions. The comparison shows that the least-squares collocation meshless method is efficient, accurate, and stable, and can be used for solving radiative heat transfer in absorbing, emitting, and scattering media.
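
    To convey the least-squares collocation idea on a much simpler problem, the sketch below solves a 1D Poisson-type equation u'' = f with Gaussian trial functions, minimizing the squared residual at collocation plus auxiliary points; the test problem, trial basis, and shape parameter are illustrative assumptions, not the radiative transfer formulation.

    ```python
    # Hedged sketch: least-squares collocation with Gaussian trial functions
    # for u''(x) = f(x), u(0) = u(1) = 0 (a toy stand-in for the RTE).
    import numpy as np

    s = 0.12                                    # Gaussian shape parameter (assumed)
    centers = np.linspace(0.0, 1.0, 15)

    def phi(x, c):                              # trial function
        return np.exp(-(x - c) ** 2 / (2 * s**2))

    def phi_xx(x, c):                           # its second derivative
        r = x - c
        return (r**2 / s**4 - 1.0 / s**2) * phi(x, c)

    f = lambda x: -(np.pi ** 2) * np.sin(np.pi * x)   # exact solution: sin(pi x)

    # collocation points plus extra auxiliary points to build the total residual
    pts = np.concatenate([np.linspace(0, 1, 15), np.linspace(0.03, 0.97, 20)])
    A = phi_xx(pts[:, None], centers[None, :])        # PDE residual rows
    b = f(pts)
    w_bc = 100.0                                      # weight on boundary residuals
    A = np.vstack([A, w_bc * phi(np.array([[0.0], [1.0]]), centers[None, :])])
    b = np.concatenate([b, [0.0, 0.0]])

    coef, *_ = np.linalg.lstsq(A, b, rcond=None)      # minimize total squared residual
    xq = np.array([0.25, 0.5, 0.75])
    u = phi(xq[:, None], centers[None, :]) @ coef
    print("numeric:", np.round(u, 4), " exact:", np.round(np.sin(np.pi * xq), 4))
    ```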

  20. Application of least median of squared orthogonal distance (LMD) and LMD-based reweighted least squares (RLS) methods on the stock-recruitment relationship

    NASA Astrophysics Data System (ADS)

    Wang, Yan-Jun; Liu, Qun

    1999-03-01

    Analysis of stock-recruitment (SR) data is most often done by fitting various SR relationship curves to the data. Fish population dynamics data often contain stochastic variation and measurement error, which usually bias a regression analysis. This paper presents a robust regression method, least median of squared orthogonal distances (LMD), which is insensitive to abnormal values in the dependent and independent variables of a regression analysis. Outliers whose variance differs significantly from the rest of the data can be identified in a residual analysis. Then, the least squares (LS) method is applied to the SR data with the identified outliers down-weighted. The application of the LMD and LMD-based reweighted least squares (RLS) methods to simulated and real fisheries SR data is explored.
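
    The sketch below illustrates the least-median-of-squares idea on invented data with one gross outlier (using vertical rather than orthogonal distances, for brevity): candidate lines through point pairs are scored by the median squared residual, and the winner is refit by reweighted least squares.

    ```python
    # Hedged sketch: least median of squares (LMS) line fit with reweighting.
    # Uses vertical residuals for brevity; the paper's LMD uses orthogonal distances.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 12)
    y = 1.5 * x + 2 + 0.2 * rng.standard_normal(12)
    y[4] = 30.0                                   # one gross outlier

    best = None
    for i, j in combinations(range(len(x)), 2):   # exact lines through point pairs
        m = (y[j] - y[i]) / (x[j] - x[i])
        c = y[i] - m * x[i]
        score = np.median((y - (m * x + c)) ** 2) # median squared residual
        if best is None or score < best[0]:
            best = (score, m, c)

    score, m, c = best
    resid = np.abs(y - (m * x + c))
    keep = resid < 2.5 * np.sqrt(score)           # down-weight (here: drop) outliers
    m2, c2 = np.polyfit(x[keep], y[keep], 1)      # reweighted least squares
    print(f"LMS line y = {m:.3f}x + {c:.3f};  RLS refit y = {m2:.3f}x + {c2:.3f}")
    ```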

  1. A spectral mimetic least-squares method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bochev, Pavel; Gerritsma, Marc

    We present a spectral mimetic least-squares method for a model diffusion–reaction problem, which preserves key conservation properties of the continuum problem. Casting the model problem into a first-order system for two scalar and two vector variables shifts material properties from the differential equations to a pair of constitutive relations. We also use this system to motivate a new least-squares functional involving all four fields and show that its minimizer satisfies the differential equations exactly. Discretization of the four-field least-squares functional by spectral spaces compatible with the differential operators leads to a least-squares method in which the differential equations are also satisfied exactly. Additionally, the latter are reduced to purely topological relationships for the degrees of freedom that can be satisfied without reference to basis functions. Furthermore, numerical experiments confirm the spectral accuracy of the method and its local conservation.

  2. A spectral mimetic least-squares method

    DOE PAGES

    Bochev, Pavel; Gerritsma, Marc

    2014-09-01

    We present a spectral mimetic least-squares method for a model diffusion–reaction problem, which preserves key conservation properties of the continuum problem. Casting the model problem into a first-order system for two scalar and two vector variables shifts material properties from the differential equations to a pair of constitutive relations. We also use this system to motivate a new least-squares functional involving all four fields and show that its minimizer satisfies the differential equations exactly. Discretization of the four-field least-squares functional by spectral spaces compatible with the differential operators leads to a least-squares method in which the differential equations are also satisfied exactly. Additionally, the latter are reduced to purely topological relationships for the degrees of freedom that can be satisfied without reference to basis functions. Furthermore, numerical experiments confirm the spectral accuracy of the method and its local conservation.

  3. Adaptive slab laser beam quality improvement using a weighted least-squares reconstruction algorithm.

    PubMed

    Chen, Shanqiu; Dong, LiZhi; Chen, XiaoJun; Tan, Yi; Liu, Wenjin; Wang, Shuai; Yang, Ping; Xu, Bing; Ye, YuTang

    2016-04-10

    Adaptive optics is an important technology for improving beam quality in solid-state slab lasers. However, there are uncorrectable aberrations in partial areas of the beam. The criterion of the conventional least-squares reconstruction method makes the zones with small aberrations insensitive and hinders those zones from being further corrected. In this paper, a weighted least-squares reconstruction method is proposed to improve the relative sensitivity of zones with small aberrations and to further improve beam quality. Relatively small weights are applied to the zones with large residual aberrations. Comparisons of results show that peak intensity in the far field improved from 1242 analog-digital units (ADU) to 2248 ADU, and beam quality β improved from 2.5 to 2.0. This indicates that the weighted least-squares method performs better than the least-squares reconstruction method when there are large zonal uncorrectable aberrations in the slab laser system.
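
    A weighted least-squares solve of this kind reduces to minimizing ||W^(1/2)(Ax - b)||²; the toy sketch below contrasts it with the unweighted solution, with the matrix, data, and weights invented for illustration (not a wavefront reconstructor).

    ```python
    # Hedged sketch: weighted vs ordinary least squares, x = argmin ||W^(1/2)(Ax-b)||^2.
    # A, b and the weights are invented; a small weight de-emphasizes the row (zone)
    # carrying a large uncorrectable error, analogous to the paper's zone weighting.
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((8, 3))
    x_true = np.array([1.0, -2.0, 0.5])
    b = A @ x_true
    b[0] += 5.0                                   # one zone with a large residual error

    w = np.ones(8)
    w[0] = 0.01                                   # small weight on the bad zone
    Wh = np.sqrt(w)[:, None]                      # W^(1/2) applied row-wise

    x_ols, *_ = np.linalg.lstsq(A, b, rcond=None)
    x_wls, *_ = np.linalg.lstsq(Wh * A, Wh[:, 0] * b, rcond=None)
    print("OLS:", np.round(x_ols, 3))
    print("WLS:", np.round(x_wls, 3), " (closer to", x_true, ")")
    ```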

  4. Fitting a function to time-dependent ensemble averaged data.

    PubMed

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function-fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general-purpose function-fitting methods, the correlated chi-square method and the weighted least-squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least-squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion, and continuous-time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.

  5. A component prediction method for flue gas of natural gas combustion based on nonlinear partial least squares method.

    PubMed

    Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun

    2014-01-01

    Quantitative analysis of the flue gas of a natural gas-fired generator is significant for energy conservation and emission reduction. The traditional partial least squares method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input, based on a radial basis function neural network (RBFNN), is used for component prediction of flue gas. In the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden-layer nodes of the RBFNN form the extension of the original independent input matrix. Then, partial least squares regression is performed on the extended input matrix and the output matrix to establish the component prediction model of the flue gas. A near-infrared spectral dataset of flue gas from natural gas combustion is used to estimate the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of the prediction values of the proposed method for methane, carbon monoxide, and carbon dioxide are reduced by 4.74%, 21.76%, and 5.32%, respectively, compared with those of PLS. Hence, the proposed method has higher predictive capability and better robustness.
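
    The idea of extending the input matrix with RBF hidden-layer outputs before a PLS regression can be sketched as below; the data, kernel centers, and width are illustrative assumptions, and scikit-learn's PLSRegression stands in for the paper's PLS step.

    ```python
    # Hedged sketch: PLS on an input matrix extended with RBF hidden-layer outputs.
    # Data, centers and kernel width are invented; sklearn's PLSRegression is used.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, size=(200, 4))                  # original inputs (spectra proxies)
    y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(200)

    centers = X[rng.choice(len(X), 10, replace=False)]     # RBF centers from the data
    width = 0.8
    H = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) ** 2
               / (2 * width ** 2))                         # hidden-layer outputs
    X_ext = np.hstack([X, H])                              # extended input matrix

    for name, inputs in [("linear PLS", X), ("RBF-extended PLS", X_ext)]:
        pls = PLSRegression(n_components=min(6, inputs.shape[1]))
        pls.fit(inputs, y)
        rmse = np.sqrt(np.mean((pls.predict(inputs).ravel() - y) ** 2))
        print(f"{name:18s} training RMSE: {rmse:.4f}")
    ```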

  6. Coherent Anomaly Method Calculation on the Cluster Variation Method. II.

    NASA Astrophysics Data System (ADS)

    Wada, Koh; Watanabe, Naotosi; Uchida, Tetsuya

    The critical exponents of the bond percolation model are calculated in the D(= 2,3,…)-dimensional simple cubic lattice on the basis of Suzuki's coherent anomaly method (CAM) by making use of a series of the pair, the square-cactus and the square approximations of the cluster variation method (CVM) in the s-state Potts model. These simple approximations give reasonable values of critical exponents α, β, γ and ν in comparison with ones estimated by other methods. It is also shown that the results of the pair and the square-cactus approximations can be derived as exact results of the bond percolation model on the Bethe and the square-cactus lattice, respectively, in the presence of ghost field without recourse to the s→1 limit of the s-state Potts model.

  7. Comparing least-squares and quantile regression approaches to analyzing median hospital charges.

    PubMed

    Olsen, Cody S; Clark, Amy E; Thomas, Andrea M; Cook, Lawrence J

    2012-07-01

    Emergency department (ED) and hospital charges obtained from administrative data sets are useful descriptors of injury severity and the burden to EDs and the health care system. However, charges are typically positively skewed due to costly procedures, long hospital stays, and complicated or prolonged treatment for few patients. The median is not affected by extreme observations and is useful in describing and comparing distributions of hospital charges. A least-squares analysis employing a log transformation is one approach for estimating median hospital charges, corresponding confidence intervals (CIs), and differences between groups; however, this method requires certain distributional properties. An alternate method is quantile regression, which allows estimation and inference related to the median without making distributional assumptions. The objective was to compare the log-transformation least-squares method to the quantile regression approach for estimating median hospital charges, differences in median charges between groups, and associated CIs. The authors performed simulations using repeated sampling of observed statewide ED and hospital charges and charges randomly generated from a hypothetical lognormal distribution. The median and 95% CI and the multiplicative difference between the median charges of two groups were estimated using both least-squares and quantile regression methods. Performance of the two methods was evaluated. In contrast to least squares, quantile regression produced estimates that were unbiased and had smaller mean square errors in simulations of observed ED and hospital charges. Both methods performed well in simulations of hypothetical charges that met least-squares method assumptions. When the data did not follow the assumed distribution, least-squares estimates were often biased, and the associated CIs had lower than expected coverage as sample size increased. Quantile regression analyses of hospital charges provide unbiased estimates even when lognormal and equal variance assumptions are violated. These methods may be particularly useful in describing and analyzing hospital charges from administrative data sets. © 2012 by the Society for Academic Emergency Medicine.
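
    A minimal comparison in this spirit is sketched below on synthetic lognormal "charges": OLS on log charges versus median (quantile) regression via statsmodels; the data and group effect are invented, not the statewide data of the paper.

    ```python
    # Hedged sketch: estimating a median charge ratio between two groups by
    # (a) OLS on log charges and (b) median regression. Data are synthetic lognormal.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 500
    group = np.repeat([0, 1], n)                       # two patient groups
    log_charge = 7.0 + 0.4 * group + rng.standard_normal(2 * n)
    charge = np.exp(log_charge)                        # skewed hospital charges

    X = sm.add_constant(group.astype(float))

    ols = sm.OLS(np.log(charge), X).fit()              # relies on lognormality
    qr = sm.QuantReg(charge, X).fit(q=0.5)             # no distributional assumption

    print("OLS median ratio     :", np.exp(ols.params[1]).round(3))
    med0, med1 = qr.params[0], qr.params[0] + qr.params[1]
    print("QuantReg median ratio:", (med1 / med0).round(3), "(true: e^0.4 ≈ 1.492)")
    ```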

  8. Ordinary Least Squares and Quantile Regression: An Inquiry-Based Learning Approach to a Comparison of Regression Methods

    ERIC Educational Resources Information Center

    Helmreich, James E.; Krog, K. Peter

    2018-01-01

    We present a short, inquiry-based learning course on concepts and methods underlying ordinary least squares (OLS), least absolute deviation (LAD), and quantile regression (QR). Students investigate squared, absolute, and weighted absolute distance functions (metrics) as location measures. Using differential calculus and properties of convex…

  9. A method for simultaneously counterbalancing condition order and assignment of stimulus materials to conditions.

    PubMed

    Zeelenberg, René; Pecher, Diane

    2015-03-01

    Counterbalanced designs are frequently used in the behavioral sciences. Studies often counterbalance either the order in which conditions are presented in the experiment or the assignment of stimulus materials to conditions. Occasionally, researchers need to simultaneously counterbalance both condition order and stimulus assignment to conditions. Lewis (1989; Behavior Research Methods, Instruments, & Computers 25:414-415, 1993) presented a method for constructing Latin squares that fulfill these requirements. The resulting Latin squares counterbalance immediate sequential effects, but not remote sequential effects. Here, we present a new method for generating Latin squares that simultaneously counterbalance both immediate and remote sequential effects and assignment of stimuli to conditions. An Appendix is provided to facilitate implementation of these Latin square designs.
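
    For reference, a standard balanced Latin square (Williams design), which counterbalances immediate sequential effects for an even number of conditions, can be generated as below; this is the classical construction, not the authors' extended method for remote sequential effects and stimulus assignment.

    ```python
    # Hedged sketch: classical Williams (balanced) Latin square for even n.
    # Counterbalances immediate sequential effects; NOT the authors' extended design.
    def williams_square(n: int):
        """Row = participant order; entries are condition indices 0..n-1."""
        first_row = [0]
        left, right = 1, n - 1
        while len(first_row) < n:                 # interleave 1, n-1, 2, n-2, ...
            first_row.append(left); left += 1
            if len(first_row) < n:
                first_row.append(right); right -= 1
        return [[(c + shift) % n for c in first_row] for shift in range(n)]

    for row in williams_square(4):
        print(row)
    # Each condition appears once per row and column, and each ordered pair of
    # adjacent conditions occurs exactly once across rows (for even n).
    ```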

  10. Simultaneous quantitative analysis of olmesartan, amlodipine and hydrochlorothiazide in their combined dosage form utilizing classical and alternating least squares based chemometric methods.

    PubMed

    Darwish, Hany W; Bakheit, Ahmed H; Abdelhameed, Ali S

    2016-03-01

    Simultaneous spectrophotometric analysis of a multi-component dosage form of olmesartan, amlodipine and hydrochlorothiazide used for the treatment of hypertension has been carried out using various chemometric methods. Multivariate calibration methods include classical least squares (CLS) executed by net analyte processing (NAP-CLS), orthogonal signal correction (OSC-CLS) and direct orthogonal signal correction (DOSC-CLS) in addition to multivariate curve resolution-alternating least squares (MCR-ALS). Results demonstrated the efficiency of the proposed methods as quantitative tools of analysis as well as their qualitative capability. The three analytes were determined precisely using the aforementioned methods in an external data set and in a dosage form after optimization of experimental conditions. Finally, the efficiency of the models was validated via comparison with the partial least squares (PLS) method in terms of accuracy and precision.
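
    The core CLS step that these variants build on solves the Beer-Lambert mixing model A = C K in a least-squares sense; below is a toy three-component unmixing with invented pure-component spectra, not the drug spectra of the paper.

    ```python
    # Hedged sketch: classical least squares (CLS) spectral unmixing, a = c @ K.
    # Pure-component spectra K and mixture concentrations are invented.
    import numpy as np

    rng = np.random.default_rng(5)
    wl = np.linspace(0, 1, 120)                       # wavelength axis (arbitrary)
    gauss = lambda mu, s: np.exp(-(wl - mu) ** 2 / (2 * s**2))
    K = np.vstack([gauss(0.3, 0.05), gauss(0.5, 0.07), gauss(0.7, 0.04)])  # 3 x 120

    c_true = np.array([0.2, 0.5, 0.3])                # three-component stand-ins
    a = c_true @ K + 0.002 * rng.standard_normal(120) # measured mixture spectrum

    # CLS prediction: least-squares solution of K^T c = a
    c_hat, *_ = np.linalg.lstsq(K.T, a, rcond=None)
    print("true:", c_true, " estimated:", np.round(c_hat, 3))
    ```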

  11. Coherent Anomaly Method Calculation on the Cluster Variation Method. II. Critical Exponents of Bond Percolation Model

    NASA Astrophysics Data System (ADS)

    Wada, Koh; Watanabe, Naotosi; Uchida, Tetsuya

    1991-10-01

    The critical exponents of the bond percolation model are calculated in the D(= 2, 3, …)-dimensional simple cubic lattice on the basis of Suzuki's coherent anomaly method (CAM) by making use of a series of the pair, the square-cactus and the square approximations of the cluster variation method (CVM) in the s-state Potts model. These simple approximations give reasonable values of critical exponents α, β, γ and ν in comparison with ones estimated by other methods. It is also shown that the results of the pair and the square-cactus approximations can be derived as exact results of the bond percolation model on the Bethe and the square-cactus lattice, respectively, in the presence of ghost field without recourse to the s→1 limit of the s-state Potts model.

  12. [The research on separating and extracting overlapping spectral feature lines in LIBS using damped least squares method].

    PubMed

    Wang, Yin; Zhao, Nan-jing; Liu, Wen-qing; Yu, Yang; Fang, Li; Meng, De-shuo; Hu, Li; Zhang, Da-hai; Ma, Min-jun; Xiao, Xue; Wang, Yu; Liu, Jian-guo

    2015-02-01

    In recent years, laser-induced breakdown spectroscopy (LIBS) has developed rapidly. As a new material-composition detection technology, LIBS can detect multiple elements simultaneously, fast and simply, without complex sample preparation, and can realize field, in-situ composition detection of the sample to be tested. This technology is very promising in many fields. Separating, fitting, and extracting spectral feature lines is very important in LIBS; it is the cornerstone of spectral feature recognition and of subsequent element-concentration inversion research. To realize effective separation, fitting, and extraction of spectral feature lines, the initial parameters for spectral line fitting were analyzed and determined before iteration. The spectral feature line of chromium (Cr I: 427.480 nm) in fly ash gathered from a coal-fired power station, which overlaps with an iron line (Fe I: 427.176 nm), was separated and extracted using the damped least-squares method. Based on Gauss-Newton iteration, the damped least-squares method adds a damping factor to the step and adjusts the step length dynamically according to the feedback information after each iteration, in order to prevent the iteration from diverging and to ensure fast convergence. The damped least-squares method yields better separation, fitting, and extraction of spectral feature lines and more accurate intensity values. The spectral feature lines of chromium in samples containing different concentrations of chromium were separated and extracted, and the intensity values of the corresponding spectral lines were obtained using the damped least-squares method and the ordinary least-squares method separately. Calibration curves relating spectral line intensity to chromium concentration were plotted, and their linear correlations were compared. The experimental results showed that the linear correlation obtained by the damped least-squares method was better than that obtained by the least-squares method. The damped least-squares method is therefore stable, reliable, and suitable for separating, fitting, and extracting spectral feature lines in LIBS.
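
    Damped least squares is essentially the Levenberg-Marquardt algorithm; the sketch below separates two overlapping Gaussian lines (synthetic stand-ins for the Fe I 427.176 nm and Cr I 427.480 nm lines) using scipy's Levenberg-Marquardt implementation, on invented data.

    ```python
    # Hedged sketch: separating two overlapping spectral lines with damped least
    # squares (Levenberg-Marquardt). Line shapes and data are synthetic.
    import numpy as np
    from scipy.optimize import least_squares

    x = np.linspace(426.8, 427.9, 300)                 # wavelength (nm)

    def two_gaussians(p, x):
        a1, c1, s1, a2, c2, s2 = p
        return (a1 * np.exp(-(x - c1) ** 2 / (2 * s1**2))
                + a2 * np.exp(-(x - c2) ** 2 / (2 * s2**2)))

    p_true = [1.0, 427.176, 0.06, 0.8, 427.480, 0.06]  # Fe I and Cr I stand-ins
    rng = np.random.default_rng(6)
    y = two_gaussians(p_true, x) + 0.01 * rng.standard_normal(x.size)

    residual = lambda p: two_gaussians(p, x) - y
    p0 = [0.5, 427.1, 0.1, 0.5, 427.5, 0.1]            # rough initial guess
    fit = least_squares(residual, p0, method="lm")     # damped (LM) iteration

    print("intensities:", np.round([fit.x[0], fit.x[3]], 3),
          "centers:", np.round([fit.x[1], fit.x[4]], 3))
    ```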

  13. Three-dimensional Reconstruction of Scar Contracture-bearing Axilla and Digital Webs Using the Square Flap Method

    PubMed Central

    Huang, Chenyu

    2014-01-01

    Background: Joint scar contractures are characterized by tight bands of soft tissue that bridge the 2 ends of the joint like a web. Classical treatment methods such as Z-plasties are mainly based on 2-dimensional designs. Our square flap method is an alternative surgical method that restores the span of the web in a stereometric fashion, thereby reconstructing joint function. Methods: In total, 20 Japanese patients with joint scar contractures on the axillary (n = 10) or first digital web (n = 10) underwent square flap surgery. The maximum range of motion and commissure length were measured before and after surgery. A theoretical stereometric geometrical model of the square flap was established to compare it to the classical single (60 degree), 4-flap (45 degree), and 5-flap (60 degree) Z-plasties in terms of theoretical web reconstruction efficacy. Results: All cases achieved 100% contracture release. The maximum range of motion and web space improved after square flap surgery (P = 0.001). Stereometric geometrical modeling revealed that the standard square flap (α = 45 degree; β = 90 degree) yields a larger flap area, length/width ratio, and postsurgical commissure length than the Z-plasties. It can also be adapted by varying angles α and β, although certain angle thresholds must be met to obtain the stereometric advantages of this method. Conclusions: When used to treat joint scar contractures, the square flap method can fully span the web space in a stereometric manner, thus yielding a close-to-original shape and function. Compared with the classical Z-plasties, it also provides sufficient anatomical blood supply while imposing the least physiological tension on the adjacent skin. PMID:25289342

  14. Analysis of stability for stochastic delay integro-differential equations.

    PubMed

    Zhang, Yu; Li, Longsuo

    2018-01-01

    In this paper, we are concerned with the stability of numerical methods applied to stochastic delay integro-differential equations. For linear stochastic delay integro-differential equations, it is shown that mean-square stability is preserved by the split-step backward Euler method without any restriction on the step size, while the Euler-Maruyama method reproduces mean-square stability only under a step-size constraint. We also confirm the mean-square stability of the split-step backward Euler method for nonlinear stochastic delay integro-differential equations. Numerical experiments further verify the theoretical results.

  15. The Use of Alternative Regression Methods in Social Sciences and the Comparison of Least Squares and M Estimation Methods in Terms of the Determination of Coefficient

    ERIC Educational Resources Information Center

    Coskuntuncel, Orkun

    2013-01-01

    The purpose of this study is twofold: the first aim is to show the effect of outliers on the widely used least squares regression estimator in the social sciences. The second aim is to compare the classical method of least squares with the robust M-estimator using the coefficient of determination (R²). For this purpose,…

  16. Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chater, Mario, E-mail: chaterm@mit.edu; Ni, Angxiu, E-mail: niangxiu@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu

    This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed derivatives for several chaotic ODEs and PDEs. The development in this paper aims to simplify the Least Squares Shadowing method by improving how time dilation is treated. Instead of adding an explicit time-dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.

  17. New method to incorporate Type B uncertainty into least-squares procedures in radionuclide metrology.

    PubMed

    Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei

    2016-03-01

    We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional probability density functions (PDFs) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, called nuisance parameters. We use the extended likelihood function to make point and interval estimates of parameters in basically the same way as in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study of a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained using our procedure with those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.

  18. An algorithm for propagating the square-root covariance matrix in triangular form

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Choe, C. Y.

    1976-01-01

    A method for propagating the square root of the state error covariance matrix in lower triangular form is described. The algorithm can be combined with any triangular square-root measurement update algorithm to obtain a triangular square-root sequential estimation algorithm. The triangular square-root algorithm compares favorably with the conventional sequential estimation algorithm with regard to computation time.
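
    The core triangular propagation step can be sketched with a QR factorization: if P = S Sᵀ and the time update is P⁺ = F P Fᵀ + Q, then a lower-triangular factor of P⁺ is the transpose of the R factor of [F S, Q^(1/2)]ᵀ. The matrices below are invented, and this is the standard QR route rather than necessarily the paper's exact algorithm.

    ```python
    # Hedged sketch: propagating a lower-triangular covariance square root S
    # through P+ = F P F^T + Q via one QR factorization. Matrices are invented.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 3
    F = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # state transition
    S = np.linalg.cholesky(np.diag([1.0, 0.5, 0.2]))    # lower-triangular sqrt of P
    Q_half = 0.05 * np.eye(n)                           # sqrt of process noise Q

    M = np.hstack([F @ S, Q_half])                      # n x 2n, M M^T = F P F^T + Q
    _, R = np.linalg.qr(M.T)                            # M^T = Q_r R  =>  M M^T = R^T R
    S_new = R.T                                         # lower-triangular sqrt of P+

    P_direct = F @ (S @ S.T) @ F.T + Q_half @ Q_half.T
    print("max |S_new S_new^T - P+| =", np.abs(S_new @ S_new.T - P_direct).max())
    ```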

  19. A simple method for processing data with least square method

    NASA Astrophysics Data System (ADS)

    Wang, Chunyan; Qi, Liqun; Chen, Yongxiang; Pang, Guangning

    2017-08-01

    The least-squares method is widely used in data processing and error estimation. This mathematical method has become an essential technique for parameter estimation, data processing, regression analysis, and experimental data fitting, and has become a criterion tool for statistical inference. In measurement data analysis, fitting of complex relationships is usually based on the least-squares principle, i.e., matrix methods are used to obtain the final estimate and to improve its accuracy. In this paper, a new solution method is presented that is based on algebraic computation and is relatively straightforward and easy to understand. The practicability of this method is illustrated with a concrete example.

  20. Avian leucocyte counting using the hemocytometer

    USGS Publications Warehouse

    Dein, F.J.; Wilson, A.; Fischer, D.; Langenberg, P.

    1994-01-01

    Automated methods for counting leucocytes in avian blood are not available because of the presence of nucleated erythrocytes and thrombocytes. Therefore, total white blood cell counts are performed by hand using a hemocytometer. The Natt and Herrick and the Unopette methods are the most common stain and diluent preparations for this procedure. Replicate hemocytometer counts using these two methods were performed on blood from four birds of different species. Cells present in each square of the hemocytometer were counted. Counting cells in the corner, side, or center hemocytometer squares produced statistically equivalent results; counting four squares per chamber provided a result similar to that obtained by counting nine squares; and the Unopette method was more precise for hemocytometer counting than was the Natt and Herrick method. The Unopette method is easier to learn and perform but is an indirect process, utilizing the differential count from a stained smear. The Natt and Herrick method is a direct total count, but cell identification is more difficult.

  1. A Two-Layer Least Squares Support Vector Machine Approach to Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Liu, Jingli; Li, Jianping; Xu, Weixuan; Shi, Yong

    Least squares support vector machine (LS-SVM) is a revised version of the support vector machine (SVM) and has been proved to be a useful tool for pattern recognition. LS-SVM has excellent generalization performance and low computational cost. In this paper, we propose a new method called the two-layer least squares support vector machine, which combines kernel principal component analysis (KPCA) with the linear programming form of the least squares support vector machine. With this method, sparseness and robustness are obtained while handling high-dimensional, large-scale databases. A U.S. commercial credit card database is used to test the efficiency of our method, and the results proved satisfactory.

  2. New method for propagating the square root covariance matrix in triangular form. [using Kalman-Bucy filter

    NASA Technical Reports Server (NTRS)

    Choe, C. Y.; Tapley, B. D.

    1975-01-01

    A method proposed by Potter of applying the Kalman-Bucy filter to the problem of estimating the state of a dynamic system is described, in which the square root of the state error covariance matrix is used to process the observations. A new technique which propagates the covariance square root matrix in lower triangular form is given for the discrete observation case. The technique is faster than previously proposed algorithms and is well-adapted for use with the Carlson square root measurement algorithm.

  3. An error analysis of least-squares finite element method of velocity-pressure-vorticity formulation for Stokes problem

    NASA Technical Reports Server (NTRS)

    Chang, Ching L.; Jiang, Bo-Nan

    1990-01-01

    A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formulation. The 2D Stokes problem is analyzed to define the product space and its inner product, and a priori estimates are derived for the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.

  4. Two-dimensional wavefront reconstruction based on double-shearing and least squares fitting

    NASA Astrophysics Data System (ADS)

    Liang, Peiying; Ding, Jianping; Zhu, Yangqing; Dong, Qian; Huang, Yuhua; Zhu, Zhen

    2017-06-01

    The two-dimensional wavefront reconstruction method based on double-shearing and least squares fitting is proposed in this paper. Four one-dimensional phase estimates of the measured wavefront, which correspond to the two shears and the two orthogonal directions, could be calculated from the differential phase, which solves the problem of the missing spectrum, and then by using the least squares method the two-dimensional wavefront reconstruction could be done. The numerical simulations of the proposed algorithm are carried out to verify the feasibility of this method. The influence of noise generated from different shear amount and different intensity on the accuracy of the reconstruction is studied and compared with the results from the algorithm based on single-shearing and least squares fitting. Finally, a two-grating lateral shearing interference experiment is carried out to verify the wavefront reconstruction algorithm based on doubleshearing and least squares fitting.

  5. A study of autonomous satellite navigation methods using the global positioning satellite system

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.

    1980-01-01

    Special orbit determination algorithms were developed to accommodate the size and speed limitations of on-board computer systems of the NAVSTAR Global Positioning System. The algorithms use square-root sequential filtering methods. A new method for the time update of the square-root covariance matrix was also developed. In addition, the time update method was compared with another square-root covariance propagation method to determine relative performance characteristics. Comparisons were based on the results of computer simulations of the LANDSAT-D satellite processing pseudo-range and pseudo-range-rate measurements from the phase one GPS. A summary of the comparison results is presented.

  6. Least Squares Moving-Window Spectral Analysis.

    PubMed

    Lee, Young Jong

    2017-08-01

    Least squares regression is proposed as a moving-window method for analysis of a series of spectra acquired as a function of external perturbation. The least squares moving-window (LSMW) method can be considered an extended form of Savitzky-Golay differentiation for nonuniform perturbation spacing. LSMW is characterized in terms of moving-window size, perturbation spacing type, and intensity noise. Simulation results from LSMW are compared with results from other numerical differentiation methods, such as single-interval differentiation, autocorrelation moving-window, and perturbation correlation moving-window methods. It is demonstrated that this simple LSMW method can be useful for quantitative analysis of nonuniformly spaced spectral data with high-frequency noise.
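
    A minimal sketch of the moving-window least-squares idea, assuming nonuniformly spaced perturbation values x and a spectra matrix Y; the window size and polynomial order are illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    def lsmw_derivative(x, Y, window=7, order=2):
        """Moving-window least-squares slope estimate along a perturbation axis.

        x: (m,) possibly nonuniform perturbation values; Y: (m, k) spectra
        (rows = perturbation points, columns = spectral channels).
        Fits a local polynomial in each window and returns its first
        derivative at the window centre, analogous to Savitzky-Golay
        differentiation but valid for nonuniform spacing."""
        m = len(x)
        half = window // 2
        D = np.full_like(Y, np.nan, dtype=float)
        for i in range(half, m - half):
            sl = slice(i - half, i + half + 1)
            # centre x to improve conditioning; fit all channels at once
            coeffs = np.polyfit(x[sl] - x[i], Y[sl], order)  # (order+1, k)
            D[i] = coeffs[-2]           # first-derivative term at the centre
        return D
    ```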

  7. Three-dimensional Reconstruction of Scar Contracture-bearing Axilla and Digital Webs Using the Square Flap Method.

    PubMed

    Huang, Chenyu; Ogawa, Rei

    2014-05-01

    Joint scar contractures are characterized by tight bands of soft tissue that bridge the two ends of the joint like a web. Classical treatment methods such as Z-plasties are mainly based on 2-dimensional designs. Our square flap method is an alternative surgical method that restores the span of the web in a stereometric fashion, thereby reconstructing joint function. In total, 20 Japanese patients with joint scar contractures of the axilla (n = 10) or first digital web (n = 10) underwent square flap surgery. The maximum range of motion and commissure length were measured before and after surgery. A theoretical stereometric geometrical model of the square flap was established to compare it to the classical single (60 degrees), 4-flap (45 degrees), and 5-flap (60 degrees) Z-plasties in terms of theoretical web reconstruction efficacy. All cases achieved 100% contracture release. The maximum range of motion and web space improved after square flap surgery (P = 0.001). Stereometric geometrical modeling revealed that the standard square flap (α = 45 degrees; β = 90 degrees) yields a larger flap area, length/width ratio, and postsurgical commissure length than the Z-plasties. It can also be adapted by varying angles α and β, although certain angle thresholds must be met to obtain the stereometric advantages of this method. When used to treat joint scar contractures, the square flap method can fully span the web space in a stereometric manner, thus yielding a close-to-original shape and function. Compared with the classical Z-plasties, it also provides sufficient anatomical blood supply while imposing the least physiological tension on the adjacent skin.

  8. Simple and Reliable Determination of Intravoxel Incoherent Motion Parameters for the Differential Diagnosis of Head and Neck Tumors

    PubMed Central

    Sasaki, Miho; Sumi, Misa; Eida, Sato; Katayama, Ikuo; Hotokezaka, Yuka; Nakamura, Takashi

    2014-01-01

    Intravoxel incoherent motion (IVIM) imaging can characterize diffusion and perfusion of normal and diseased tissues, and IVIM parameters are conventionally determined using a cumbersome least-squares method. We evaluated a simple technique for determining IVIM parameters using geometric analysis of the multiexponential signal decay curve as an alternative to the least-squares method for the diagnosis of head and neck tumors. Pure diffusion coefficients (D), microvascular volume fraction (f), perfusion-related incoherent microcirculation (D*), and a perfusion parameter that is heavily weighted towards extravascular space (P) were determined geometrically (Geo D, Geo f, and Geo P) or by the least-squares method (Fit D, Fit f, and Fit D*) in normal structures and 105 head and neck tumors. The IVIM parameters were compared between the 2 techniques in terms of their levels and diagnostic abilities. The IVIM parameters could not be determined in 14 tumors with the least-squares method alone and in 4 tumors with both the geometric and least-squares methods. The geometric IVIM values were significantly different (p<0.001) from Fit values (+2±4% and −7±24% for D and f values, respectively). Geo D and Fit D differentiated between lymphomas and SCCs with similar efficacy (78% and 80% accuracy, respectively). Stepwise approaches using combinations of Geo D and Geo P, Geo D and Geo f, or Fit D and Fit D* differentiated between pleomorphic adenomas, Warthin tumors, and malignant salivary gland tumors with the same efficacy (91% accuracy = 21/23). However, a stepwise differentiation using Fit D and Fit f was less effective (83% accuracy = 19/23). Considering the cumbersome procedures required by the least-squares method compared with the geometric method, we conclude that geometric determination of IVIM parameters can be an alternative to the least-squares method in the diagnosis of head and neck tumors. PMID:25402436

  9. Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms

    NASA Astrophysics Data System (ADS)

    Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.

    2017-09-01

    Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral norms, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further studies are needed to develop methods and algorithms for solving the reduced mathematical programming problems, in which the objective functions and admissible domains are constructed using polyhedral vector norms.
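
    For the Euclidean case, Tikhonov's regularized least squares can be written as an ordinary least-squares problem on an augmented system, as the sketch below shows. The polyhedral-norm generalizations discussed in the paper lead instead to linear programs and are not reproduced here; the regularization weight lam is illustrative.

    ```python
    import numpy as np

    def tikhonov_lstsq(A, b, lam=1e-2):
        """Euclidean Tikhonov regularization: solve
        min ||A x - b||^2 + lam * ||x||^2
        by stacking sqrt(lam)*I under A and zeros under b."""
        m, n = A.shape
        A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
        b_aug = np.concatenate([b, np.zeros(n)])
        x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
        return x
    ```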

  10. Error analysis on squareness of multi-sensor integrated CMM for the multistep registration method

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Wang, Yiwen; Ye, Xiuling; Wang, Zhong; Fu, Luhua

    2018-01-01

    The multistep registration (MSR) method in [1] registers two different classes of sensors deployed on the z-arm of a CMM (coordinate measuring machine): a video camera and a tactile probe sensor. Because it is difficult to obtain a precise registration with a single common standard, the method instead measures two different standards fixed on a steel plate at a constant distance from each other. Although many factors have been considered, such as the measuring ability of the sensors, the uncertainty of the machine, and the number of data pairs, there has been no exact analysis of the squareness between the x-axis and the y-axis in the xy plane. For this reason, an error analysis of the squareness of the multi-sensor integrated CMM is carried out to examine the validity of the MSR method. Synthetic experiments on the xy-plane squareness for the simplified MSR with an inclination rotation are simulated and yield a consistent result. Experiments were carried out with the multi-standard device also designed in [1], together with inspections of the xy plane using a laser interferometer. The final results conform to the simulations, and the squareness errors of the MSR method are similar to the interferometer results. In other words, the MSR method can also be used to verify the squareness of a CMM.

  11. Latin and Magic Squares

    ERIC Educational Resources Information Center

    Emanouilidis, Emanuel

    2005-01-01

    Latin squares have existed for hundreds of years, but only rather recently have they been used in other areas such as statistics, graph theory, coding theory, and the generation of random numbers, as well as in the design and analysis of experiments. This note describes Latin and diagonal Latin squares, a method of constructing…
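
    As one concrete example of the kind of construction such notes discuss, the classical cyclic-shift construction below produces an n × n Latin square; whether this is the specific construction described in the note is an assumption.

    ```python
    def cyclic_latin_square(n):
        """Construct an n-by-n Latin square by the classical cyclic-shift
        construction: row i is the symbols 0..n-1 rotated left by i.
        Each symbol then appears exactly once in every row and column."""
        return [[(i + j) % n for j in range(n)] for i in range(n)]

    # Example: a 4x4 Latin square
    for row in cyclic_latin_square(4):
        print(row)
    ```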

  12. Uncertainty based pressure reconstruction from velocity measurement with generalized least squares

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacheng; Scalo, Carlo; Vlachos, Pavlos

    2017-11-01

    A method using generalized least squares reconstruction of the instantaneous pressure field from velocity measurements and velocity uncertainty is introduced and applied to both planar and volumetric flow data. Pressure gradients are computed on a staggered grid from the flow acceleration. The variance-covariance matrix of the pressure gradients is evaluated from the velocity uncertainty by approximating the pressure gradient error as a linear combination of velocity errors. An overdetermined system of linear equations relating the pressure to the computed pressure gradients is formulated and then solved using generalized least squares with the variance-covariance matrix of the pressure gradients. By comparing the reconstructed pressure field against other methods, such as solving the pressure Poisson equation, omni-directional integration, and ordinary least squares reconstruction, the generalized least squares method is found to be more robust to noise in the velocity measurement. The improvement in the pressure result becomes more remarkable as the velocity measurement becomes less accurate and more heteroscedastic. The uncertainty of the reconstructed pressure field is also quantified and compared across the different methods.
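
    A minimal generalized least squares solve is sketched below: each equation is weighted by the inverse of the error covariance Sigma. In the paper Sigma is propagated from the velocity uncertainty; here it is simply assumed given.

    ```python
    import numpy as np

    def generalized_least_squares(A, b, Sigma):
        """Solve min (A x - b)^T Sigma^{-1} (A x - b), weighting each
        equation by the inverse of its error covariance Sigma.
        Generic GLS sketch; the paper's propagation of Sigma from the
        velocity uncertainty is not reproduced here."""
        W = np.linalg.inv(Sigma)                    # precision matrix
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
        cov_x = np.linalg.inv(A.T @ W @ A)          # uncertainty of the solution
        return x, cov_x
    ```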

  13. Analysis of Nonlinear Dynamics by Square Matrix Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Li Hua

    The nonlinear dynamics of a system with periodic structure can be analyzed using a square matrix. In this paper, we show that, because of the special properties of the square matrix constructed for nonlinear dynamics, we can reduce its dimension from the original large number required for high-order calculation to a low dimension in the first step of the analysis. A stable Jordan decomposition is then obtained with much lower dimension. The transformation to Jordan form provides an excellent action-angle approximation to the solution of the nonlinear dynamics, in good agreement with trajectories and tunes obtained from tracking. More importantly, the deviation from constancy of the new action-angle variable provides a measure of the stability of the phase space trajectories and their tunes. Thus the square matrix provides a novel method to optimize the nonlinear dynamic system. The method is illustrated by many examples of comparison between theory and numerical simulation. Finally, we show that the square matrix method can be used for optimization to reduce the nonlinearity of a system.

  14. Understanding Least Squares through Monte Carlo Calculations

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2005-01-01

    The method of least squares (LS) is considered an important data analysis tool available to physical scientists. The mathematics of linear least squares (LLS) is summarized in a very compact matrix notation that renders it practically "formulaic".
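
    In the spirit of the article, the sketch below compares the formulaic LLS standard errors with standard errors estimated by Monte Carlo refitting of synthetic straight-line data; the model, noise level, and sample size are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 20)
    a_true, b_true, sigma = 2.0, 1.0, 0.5
    X = np.column_stack([np.ones_like(x), x])       # design matrix

    # Formulaic LLS: SEs are square roots of the covariance-matrix diagonal
    cov = sigma ** 2 * np.linalg.inv(X.T @ X)
    se_formula = np.sqrt(np.diag(cov))

    # Monte Carlo: refit many synthetic data sets and look at the scatter
    fits = []
    for _ in range(10000):
        y = a_true + b_true * x + rng.normal(0, sigma, x.size)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        fits.append(beta)
    se_mc = np.std(fits, axis=0)

    print(se_formula, se_mc)   # the two estimates should agree closely
    ```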

  15. Landsat-4 (TDRSS-user) orbit determination using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    1992-01-01

    TDRSS user orbit determination is analyzed using a batch least-squares method and a sequential estimation method. It was found that in the batch least-squares method analysis, the orbit determination consistency for Landsat-4, which was heavily tracked by TDRSS during January 1991, was about 4 meters in the rms overlap comparisons and about 6 meters in the maximum position differences in overlap comparisons. The consistency was about 10 to 30 meters in the 3 sigma state error covariance function in the sequential method analysis. As a measure of consistency, the first residual of each pass was within the 3 sigma bound in the residual space.

  16. Total variation superiorized conjugate gradient method for image reconstruction

    NASA Astrophysics Data System (ADS)

    Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.

    2018-03-01

    The conjugate gradient (CG) method is commonly used for the relatively-rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.

  17. Removal of model viruses, E. coli and Cryptosporidium oocysts from surface water by zirconium and chitosan coagulants.

    PubMed

    Christensen, Ekaterina; Nilsen, Vegard; Håkonsen, Tor; Heistad, Arve; Gantzer, Christophe; Robertson, Lucy J; Myrmel, Mette

    2017-10-01

    The present work evaluates the effect of contact filtration, preceded by coagulation with zirconium (Zr) and chitosan coagulants, on model microorganisms and waterborne pathogens. River water intended for potable water production was spiked with MS2 and Salmonella Typhimurium 28B bacteriophages, Escherichia coli, and Cryptosporidium parvum oocysts prior to coagulation. The hygienic performance demonstrated by Zr comprised 3.0-4.0 log10 removal of viruses and 5.0-6.0 log10 removal of E. coli and C. parvum oocysts. Treatment with chitosan resulted in a removal of 2.5-3.0 log10 of viruses and parasites, and 4.5-5.0 log10 of bacteria. A reference coagulant, polyaluminium chloride (PACl), gave a 2.5-3.0 log10 removal of viruses and 4.5 log10 of E. coli. These results indicate that both Zr and chitosan enable adequate removal of microorganisms from surface water. The present study also attempts to assess removal rates of the selected microorganisms with regard to their size and surface properties. The isoelectric point of the Salmonella Typhimurium 28B bacteriophage is reported for the first time. The retention of the selected microorganisms in the filter bed appeared to have some correlation with their size, but the effect of the charge remained unclear.

  18. Using Least Squares for Error Propagation

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2015-01-01

    The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
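
    The covariance-matrix route the abstract refers to extends to a derived quantity: if g is the gradient of the target quantity with respect to the parameters and C the parameter covariance matrix, the propagated variance is g^T C g. The sketch below applies this to the predicted value of a straight-line fit at a chosen point; the data and the point x0 = 3 are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    x = np.linspace(1, 5, 10)
    y = 1.0 + 2.0 * x + rng.normal(0, 0.2, x.size)
    X = np.column_stack([np.ones_like(x), x])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    s2 = res[0] / (len(x) - 2)                   # residual variance
    C = s2 * np.linalg.inv(X.T @ X)              # parameter covariance matrix

    # SE of the predicted value a + b*x0 at x0 = 3, via g^T C g
    g = np.array([1.0, 3.0])                     # gradient w.r.t. (a, b)
    print(np.sqrt(g @ C @ g))
    ```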

  19. A Unified Matrix Polynomial Approach to Modal Identification

    NASA Astrophysics Data System (ADS)

    Allemang, R. J.; Brown, D. L.

    1998-04-01

    One important current focus of modal identification is a reformulation of modal parameter estimation algorithms into a single, consistent mathematical formulation with a corresponding set of definitions and unifying concepts. In particular, a matrix polynomial approach is used to unify the presentation with respect to current algorithms such as the least-squares complex exponential (LSCE), polyreference time domain (PTD), Ibrahim time domain (ITD), eigensystem realization algorithm (ERA), rational fraction polynomial (RFP), polyreference frequency domain (PFD) and complex mode indication function (CMIF) methods. Using this unified matrix polynomial approach (UMPA) allows a discussion of the similarities and differences of the commonly used methods. The use of least squares (LS), total least squares (TLS), double least squares (DLS) and singular value decomposition (SVD) methods is discussed in order to take advantage of redundant measurement data. Eigenvalue and SVD transformation methods are utilized to reduce the effective size of the resulting eigenvalue-eigenvector problem as well.

  20. Analysis and computation of a least-squares method for consistent mesh tying

    DOE PAGES

    Day, David; Bochev, Pavel

    2007-07-10

    In the finite element method, a standard approach to mesh tying is to apply Lagrange multipliers. If the interface is curved, however, discretization generally leads to adjoining surfaces that do not coincide spatially. Straightforward Lagrange multiplier methods lead to discrete formulations failing a first-order patch test [T.A. Laursen, M.W. Heinstein, Consistent mesh-tying methods for topologically distinct discretized surfaces in non-linear solid mechanics, Internat. J. Numer. Methods Eng. 57 (2003) 1197–1242]. This paper presents a theoretical and computational study of a least-squares method for mesh tying [P. Bochev, D.M. Day, A least-squares method for consistent mesh tying, Internat. J. Numer. Anal. Modeling 4 (2007) 342–352], applied to the partial differential equation -∇²φ + αφ = f. We prove optimal convergence rates for domains represented as overlapping subdomains and show that the least-squares method passes a patch test of the order of the finite element space by construction. To apply the method to subdomain configurations with gaps and overlaps, we use interface perturbations to eliminate the gaps. Finally, theoretical error estimates are illustrated by numerical experiments.

  1. Missing value imputation in DNA microarrays based on conjugate gradient method.

    PubMed

    Dorri, Fatemeh; Azmi, Paeiz; Dorri, Faezeh

    2012-02-01

    Analysis of gene expression profiles needs a complete matrix of gene array values; consequently, imputation methods have been suggested. In this paper, an algorithm based on the conjugate gradient (CG) method is proposed to estimate missing values. The k-nearest neighbors of the missing entry are first selected based on the absolute values of their Pearson correlation coefficients. Then a subset of genes among the k-nearest neighbors is labeled as the best similar ones. The CG algorithm with this subset as its input is then used to estimate the missing values. Our proposed CG-based algorithm (CGimpute) is evaluated on different data sets. The results are compared with the sequential local least squares (SLLSimpute), Bayesian principal component analysis (BPCAimpute), local least squares imputation (LLSimpute), iterated local least squares imputation (ILLSimpute) and adaptive k-nearest neighbors imputation (KNNKimpute) methods. The average normalized root mean square error (NRMSE) and relative NRMSE on different data sets with various missing rates show that CGimpute outperforms the other methods. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Penalized Nonlinear Least Squares Estimation of Time-Varying Parameters in Ordinary Differential Equations

    PubMed Central

    Cao, Jiguo; Huang, Jianhua Z.; Wu, Hulin

    2012-01-01

    Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of nonlinear least squares in which penalized splines are used to model the functional parameters and the ODE solutions are also approximated using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function, which is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate an HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of functional parameters than the existing two-step local polynomial method, which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online. PMID:23155351

  3. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.

  4. System and Method for Determining Rate of Rotation Using Brushless DC Motor

    NASA Technical Reports Server (NTRS)

    Howard, David E. (Inventor); Smith, Dennis A. (Inventor)

    2000-01-01

    A system and method are provided for measuring rate of rotation. A brushless DC motor is rotated and produces a back electromagnetic force (emf) on each winding thereof. Each winding's back-emf is squared. The squared outputs associated with each winding are combined, with the square root being taken of such combination, to produce a DC output proportional only to the rate of rotation of the motor's shaft.
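
    Assuming an ideal three-phase motor with sinusoidal back-emf whose amplitude is proportional to shaft speed (constant k), the sum of squares is independent of shaft angle θ, which is why the square root of the combined output tracks the rotation rate alone:

    ```latex
    \begin{align*}
    e_a &= k\omega\sin\theta,\qquad
    e_b = k\omega\sin\!\left(\theta - \tfrac{2\pi}{3}\right),\qquad
    e_c = k\omega\sin\!\left(\theta + \tfrac{2\pi}{3}\right),\\
    e_a^2 + e_b^2 + e_c^2 &= \tfrac{3}{2}\,k^2\omega^2
    \quad\Longrightarrow\quad
    \sqrt{e_a^2 + e_b^2 + e_c^2} = \sqrt{\tfrac{3}{2}}\,k\,\lvert\omega\rvert .
    \end{align*}
    ```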

  5. Method for interconverting drying and heating times between round and square cross sections of ponderosa pine

    Treesearch

    William T. Simpson

    2005-01-01

    To use small-diameter trees effectively as square timbers, we need to be able to estimate the amount of time it takes for these timbers to air-dry. Since experimental data on estimating air-drying time for small-diameter logs have been developed, this study looked at a way to relate that method to square timbers. Drying times were determined for a group of round cross-...

  6. A simple calculation method for determination of equivalent square field.

    PubMed

    Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad

    2012-04-01

    Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software. This is accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula, based on analysis of scatter reduction due to the inverse square law, for obtaining the equivalent field. Tables based on experimental data are published by different agencies such as the ICRU (International Commission on Radiation Units and Measurements), but there also exist mathematical formulas that yield the equivalent square field of a rectangular field and are used extensively in computational techniques for dose determination. These approaches lead to some complicated and time-consuming formulas, which motivated the current study. In this work, considering the portion of scattered radiation in the absorbed dose at a point of measurement, a numerical formula was obtained, from which a simple formula for calculating the equivalent square field was developed. Using polar coordinates and the inverse square law leads to a simple formula for calculation of the equivalent field. The presented method is an analytical approach with which one can estimate the equivalent square field of a rectangular field, and it may be used for a shielded field or an off-axis point. In addition, one can calculate the equivalent field of a rectangular field from the concept of scatter reduction with the inverse square law to a good approximation. This method may be useful in computing the Percentage Depth Dose and Tissue-Phantom Ratio, which are extensively used in treatment planning.
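
    For reference, the common baseline that such formulas are compared against is the area-to-perimeter ("4A/P") rule; the sketch below implements that standard approximation only, not the inverse-square-law formula derived in the paper.

    ```python
    def equivalent_square_side(length, width):
        """Classical area-to-perimeter rule for the equivalent square of a
        rectangular field: side = 4 * Area / Perimeter."""
        return 4.0 * (length * width) / (2.0 * (length + width))

    print(equivalent_square_side(20, 5))   # a 20 x 5 field ~ 8 cm square
    ```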

  7. Multi-element least square HDMR methods and their applications for stochastic multiscale model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com

    Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging owing to the existence of complex uncertainty and multiple physical scales in the models. To address this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To effectively treat heterogeneity and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy under certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computational complexity.

  8. Pyramidal space frame and associated methods

    DOEpatents

    Clark, Ryan Michael; White, David; Farr, Jr, Adrian Lawrence

    2016-07-19

    A space frame having a high torsional strength comprising a first square bipyramid and two planar structures extending outward from an apex of the first square bipyramid to form a "V" shape is disclosed. Some embodiments comprise a plurality of edge-sharing square bipyramids configured linearly, where the two planar structures contact apexes of all the square bipyramids. A plurality of bridging struts, apex struts, corner struts and optional internal bracing struts increase the strength and rigidity of the space frame. In an embodiment, the space frame supports a solar reflector, such as a parabolic solar reflector. Methods of fabricating and using the space frames are also disclosed.

  9. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

    This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.

  10. SIMULATIONS OF 2D AND 3D THERMOCAPILLARY FLOWS BY A LEAST-SQUARES FINITE ELEMENT METHOD. (R825200)

    EPA Science Inventory

    Numerical results for time-dependent 2D and 3D thermocapillary flows are presented in this work. The numerical algorithm is based on the Crank-Nicolson scheme for time integration, Newton's method for linearization, and a least-squares finite element method, together with a matri...

  11. A new linear least squares method for T1 estimation from SPGR signals with multiple TRs

    NASA Astrophysics Data System (ADS)

    Chang, Lin-Ching; Koay, Cheng Guan; Basser, Peter J.; Pierpaoli, Carlo

    2009-02-01

    The longitudinal relaxation time, T1, can be estimated from two or more spoiled gradient recalled echo (SPGR) images acquired with two or more flip angles and one or more repetition times (TRs). The function relating signal intensity to the parameters is nonlinear; T1 maps can be computed from SPGR signals using nonlinear least squares regression. A widely used linear method transforms the nonlinear model by assuming a fixed TR in the SPGR images. This constraint is not desirable, since multiple TRs are a clinically practical way to reduce the total acquisition time, to satisfy the required resolution, and/or to combine SPGR data acquired at different times. A new linear least squares method is proposed using a first-order Taylor expansion. Monte Carlo simulations of SPGR experiments are used to evaluate the accuracy and precision of the T1 estimates from the proposed linear and the nonlinear methods. We show that the new linear least squares method provides T1 estimates comparable in both precision and accuracy to those from the nonlinear method, allowing multiple TRs and reducing computation time significantly.
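
    The widely used fixed-TR linearization mentioned in the abstract can be sketched as follows: with Y = S/sin(a) and X = S/tan(a), the SPGR equation becomes a straight line whose slope is E1 = exp(-TR/T1). The flip angles, units, and single-TR assumption are part of this illustration; the paper's multi-TR Taylor-expansion method is not reproduced.

    ```python
    import numpy as np

    def t1_linear_fit(signals, flip_angles_deg, TR):
        """Fixed-TR linearization of the SPGR signal equation:
        S/sin(a) = E1 * S/tan(a) + M0*(1 - E1), with E1 = exp(-TR/T1).
        signals: (n,) SPGR intensities at n flip angles; TR in the same
        time units as the returned T1. Illustrative sketch only."""
        a = np.deg2rad(flip_angles_deg)
        Y, X = signals / np.sin(a), signals / np.tan(a)
        E1, intercept = np.polyfit(X, Y, 1)   # slope is E1
        return -TR / np.log(E1)
    ```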

  12. A Least-Squares-Based Weak Galerkin Finite Element Method for Second Order Elliptic Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Wang, Junping; Ye, Xiu

    Here, in this article, we introduce a least-squares-based weak Galerkin finite element method for the second order elliptic equation. This new method is shown to provide very accurate numerical approximations for both the primal and the flux variables. In contrast to other existing least-squares finite element methods, this new method allows us to use discontinuous approximating functions on finite element partitions consisting of arbitrary polygon/polyhedron shapes. We also develop a Schur complement algorithm for the resulting discretization problem by eliminating all the unknowns that represent the solution information in the interior of each element. Optimal order error estimates for both the primal and the flux variables are established. An extensive set of numerical experiments are conducted to demonstrate the robustness, reliability, flexibility, and accuracy of the least-squares-based weak Galerkin finite element method. Finally, the numerical examples cover a wide range of applied problems, including singularly perturbed reaction-diffusion equations and the flow of fluid in porous media with strong anisotropy and heterogeneity.

  13. A Least-Squares-Based Weak Galerkin Finite Element Method for Second Order Elliptic Equations

    DOE PAGES

    Mu, Lin; Wang, Junping; Ye, Xiu

    2017-08-17

    Here, in this article, we introduce a least-squares-based weak Galerkin finite element method for the second order elliptic equation. This new method is shown to provide very accurate numerical approximations for both the primal and the flux variables. In contrast to other existing least-squares finite element methods, this new method allows us to use discontinuous approximating functions on finite element partitions consisting of arbitrary polygon/polyhedron shapes. We also develop a Schur complement algorithm for the resulting discretization problem by eliminating all the unknowns that represent the solution information in the interior of each element. Optimal order error estimates for both the primal and the flux variables are established. An extensive set of numerical experiments are conducted to demonstrate the robustness, reliability, flexibility, and accuracy of the least-squares-based weak Galerkin finite element method. Finally, the numerical examples cover a wide range of applied problems, including singularly perturbed reaction-diffusion equations and the flow of fluid in porous media with strong anisotropy and heterogeneity.

  14. Speeding Fermat's factoring method

    NASA Astrophysics Data System (ADS)

    McKee, James

    A factoring method is presented which, heuristically, splits composite n in O(n^(1/4+ε)) steps. There are two ideas: an integer approximation to sqrt(q/p) provides an O(n^(1/2+ε)) algorithm in which n is represented as the difference of two rational squares; observing that if a prime m divides a square, then m^2 divides that square, a heuristic speed-up to O(n^(1/4+ε)) steps is achieved. The method is well-suited for use with small computers: the storage required is negligible, and one never needs to work with numbers larger than n itself.
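
    A minimal sketch of the basic Fermat step that the paper accelerates: search upward from ceil(sqrt(n)) for x with x^2 - n a perfect square y^2, giving n = (x - y)(x + y). The heuristic speed-ups described in the abstract are not implemented here.

    ```python
    import math

    def fermat_factor(n):
        """Basic Fermat factorization of an odd composite n."""
        x = math.isqrt(n)
        if x * x < n:
            x += 1
        while True:
            y2 = x * x - n
            y = math.isqrt(y2)
            if y * y == y2:              # x^2 - n is a perfect square
                return x - y, x + y      # n = (x - y)(x + y)
            x += 1

    print(fermat_factor(5959))   # (59, 101)
    ```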

  15. A unifying theoretical and algorithmic framework for least squares methods of estimation in diffusion tensor imaging.

    PubMed

    Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J

    2006-09-01

    A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS) and their constrained counterparts) are established through their respective objective functions and the higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights for designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel algorithms of full Newton-type for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percent relative error in estimating the trace and a lower reduced χ² value than the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when the signal-to-noise ratio (SNR) is low.

  16. Feasibility study on the least square method for fitting non-Gaussian noise data

    NASA Astrophysics Data System (ADS)

    Xu, Wei; Chen, Wen; Liang, Yingjie

    2018-02-01

    This study investigates the feasibility of the least squares method for fitting non-Gaussian noise data. We add different levels of the two typical non-Gaussian noises, Lévy and stretched Gaussian noise, to the exact values of selected functions, including linear, polynomial and exponential equations, and the maximum absolute and mean square errors are calculated for the different cases. Lévy and stretched Gaussian distributions have many applications in fractional and fractal calculus. It is observed that the non-Gaussian noises are fitted less accurately than Gaussian noise, but the stretched Gaussian cases appear to perform better than the Lévy noise cases. It is stressed that the least squares method is inapplicable to the non-Gaussian noise cases when the noise level is larger than 5%.

  17. Method and structure for cache aware transposition via rectangular subsections

    DOEpatents

    Gustavson, Fred Gehrung; Gunnels, John A

    2014-02-04

    A method and structure for transposing a rectangular matrix A in a computer includes subdividing the rectangular matrix A into one or more square submatrices and executing an in-place transposition for each of the square submatrices A_ij.
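
    A sketch of the general idea, under the assumption that the rectangle is processed in square blocks sized to fit in cache, with a diagonal-swap kernel handling a square submatrix in place. Details of the patented subdivision are not reproduced.

    ```python
    import numpy as np

    def transpose_square_inplace(A):
        """In-place transpose of a square matrix by swapping across the
        diagonal; the kernel applied to each square submatrix."""
        n = A.shape[0]
        for i in range(n):
            for j in range(i + 1, n):
                A[i, j], A[j, i] = A[j, i], A[i, j]

    def transpose_blocked(A, block=64):
        """Cache-aware transpose of a rectangular matrix: process it in
        square blocks so each block fits in cache."""
        m, n = A.shape
        B = np.empty((n, m), dtype=A.dtype)
        for i in range(0, m, block):
            for j in range(0, n, block):
                B[j:j + block, i:i + block] = A[i:i + block, j:j + block].T
        return B
    ```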

  18. Estimation of parameters in rational reaction rates of molecular biological systems via weighted least squares

    NASA Astrophysics Data System (ADS)

    Wu, Fang-Xiang; Mu, Lei; Shi, Zhong-Ke

    2010-01-01

    Models of gene regulatory networks are often derived from the statistical thermodynamics principle or the Michaelis-Menten kinetics equation. As a result, the models contain rational reaction rates which are nonlinear in both parameters and states. It is challenging to estimate parameters that enter a model nonlinearly, although there are many traditional nonlinear parameter estimation methods such as the Gauss-Newton iteration method and its variants. In this article, we develop a two-step method to estimate the parameters in rational reaction rates of gene regulatory networks via weighted linear least squares. This method takes the special structure of rational reaction rates into consideration: in a rational reaction rate, the numerator and the denominator are each linear in the parameters. By designing a special weight matrix for the linear least squares, parameters in the numerator and the denominator can be estimated by solving two linear least squares problems. The main advantage of the developed method is that it can produce analytical solutions to the estimation of parameters in rational reaction rates, which is originally a nonlinear parameter estimation problem. The developed method is applied to a couple of gene regulatory networks. The simulation results show its superior performance over the Gauss-Newton method.
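
    The cross-multiplication trick can be sketched on a Michaelis-Menten rate, which is linear in the numerator and denominator parameters after rearrangement: v = Vmax*x/(Km + x) gives Vmax*x - Km*v = v*x. The article's specially designed weight matrix is not reproduced; the optional weights argument below is an assumption.

    ```python
    import numpy as np

    def fit_michaelis_menten(x, v, weights=None):
        """Fit v = Vmax*x/(Km + x) by cross-multiplying into the linear
        problem Vmax*x - Km*v = v*x, linear in (Vmax, Km)."""
        A = np.column_stack([x, -v])
        b = v * x
        if weights is not None:
            w = np.sqrt(weights)         # row-weighted least squares
            A, b = A * w[:, None], b * w
        (Vmax, Km), *_ = np.linalg.lstsq(A, b, rcond=None)
        return Vmax, Km

    # quick check on noiseless data
    x = np.array([0.5, 1, 2, 4, 8.0])
    v = 10 * x / (3 + x)
    print(fit_michaelis_menten(x, v))   # ~ (10, 3)
    ```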

  19. An augmented classical least squares method for quantitative Raman spectral analysis against component information loss.

    PubMed

    Zhou, Yan; Cao, Hui

    2013-01-01

    We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as substitutes for the unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment on analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with a Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and the existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss, and its predictive power is comparable to that of PLS or PCR.

  20. 40 CFR Appendix C to Subpart Nnn... - Method for the Determination of Product Density

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... One square foot (12 in. by 12 in.) template, or templates that are multiples of one square foot, for... to the plant's written procedure for the designated product. 3.2 Cut samples using a one square foot (or multiples of one square foot) template. 3.3 Weigh product and obtain area weight (lb/ft²). 3.4 Measure sample...

  1. 40 CFR Appendix C to Subpart Nnn... - Method for the Determination of Product Density

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... One square foot (12 in. by 12 in.) template, or templates that are multiples of one square foot, for... to the plant's written procedure for the designated product. 3.2 Cut samples using a one square foot (or multiples of one square foot) template. 3.3 Weigh product and obtain area weight (lb/ft²). 3.4 Measure sample...

  2. A fast least-squares algorithm for population inference

    PubMed Central

    2013-01-01

    Background: Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual's genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. Results: We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. Conclusions: The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate. PMID:23343408

  3. A fast least-squares algorithm for population inference.

    PubMed

    Parry, R Mitchell; Wang, May D

    2013-01-23

    Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual's genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate.

  4. Three Perspectives on Teaching Least Squares

    ERIC Educational Resources Information Center

    Scariano, Stephen M.; Calzada, Maria

    2004-01-01

    The method of Least Squares is the most widely used technique for fitting a straight line to data, and it is typically discussed in several undergraduate courses. This article focuses on three developmentally different approaches for solving the Least Squares problem that are suitable for classroom exposition.

  5. Highly Compact Circulators in Square-Lattice Photonic Crystal Waveguides

    PubMed Central

    Jin, Xin; Ouyang, Zhengbiao; Wang, Qiong; Lin, Mi; Wen, Guohua; Wang, Jingjing

    2014-01-01

    We propose, demonstrate and investigate highly compact circulators with ultra-low insertion loss in square-lattice, square-rod photonic-crystal waveguides. Only a single magneto-optical square rod needs to be inserted at the cross center of the waveguides, making the structure very compact and ultra-efficient. The square rods around the center defect rod are replaced by several right-angled-triangle rods, further reducing the insertion loss and improving the isolation as well. By choosing a linear-dispersion region and considering the mode patterns in the square magneto-optical rod, the operating mechanism of the circulator is analyzed. By applying the finite-element method together with the Nelder-Mead optimization method, an extremely low insertion loss of 0.02 dB for the transmitted wave and an ultra-high isolation of 46-48 dB for the isolated port are obtained. The idea presented can be applied to build circulators in different wavebands, e.g., microwave or terahertz. PMID:25415417

  6. Highly compact circulators in square-lattice photonic crystal waveguides.

    PubMed

    Jin, Xin; Ouyang, Zhengbiao; Wang, Qiong; Lin, Mi; Wen, Guohua; Wang, Jingjing

    2014-01-01

    We propose, demonstrate and investigate highly compact circulators with ultra-low insertion loss in square-lattice, square-rod photonic-crystal waveguides. Only a single magneto-optical square rod needs to be inserted at the cross center of the waveguides, making the structure very compact and ultra-efficient. The square rods around the center defect rod are replaced by several right-angled-triangle rods, further reducing the insertion loss and improving the isolation as well. By choosing a linear-dispersion region and considering the mode patterns in the square magneto-optical rod, the operating mechanism of the circulator is analyzed. By applying the finite-element method together with the Nelder-Mead optimization method, an extremely low insertion loss of 0.02 dB for the transmitted wave and an ultra-high isolation of 46-48 dB for the isolated port are obtained. The idea presented can be applied to build circulators in different wavebands, e.g., microwave or terahertz.

  7. Parameter estimation using weighted total least squares in the two-compartment exchange model.

    PubMed

    Garpebring, Anders; Löfstedt, Tommy

    2018-01-01

    The linear least squares (LLS) estimator provides a fast approach to parameter estimation in the linearized two-compartment exchange model. However, the LLS method may introduce a bias through correlated noise in the system matrix of the model. The purpose of this work is to present a new estimator for the linearized two-compartment exchange model that takes this noise into account. To account for the noise in the system matrix, we developed an estimator based on the weighted total least squares (WTLS) method. Using simulations, the proposed WTLS estimator was compared, in terms of accuracy and precision, to an LLS estimator and a nonlinear least squares (NLLS) estimator. The WTLS method improved the accuracy compared to the LLS method to levels comparable to the NLLS method. This improvement was at the expense of increased computational time; however, the WTLS was still faster than the NLLS method. At high signal-to-noise ratio all methods provided similar precision, while inconclusive results were observed at low signal-to-noise ratio. The proposed method provides improvements in accuracy compared to the LLS method, however, at an increased computational cost. Magn Reson Med 79:561-567, 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  8. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.

  9. A simple calculation method for determination of equivalent square field

    PubMed Central

    Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad

    2012-01-01

    Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software. This is accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula, based on analysis of scatter reduction due to the inverse square law, for obtaining the equivalent field. Tables based on experimental data are published by different agencies such as the ICRU (International Commission on Radiation Units and Measurements), but there also exist mathematical formulas that yield the equivalent square field of a rectangular field and are used extensively in computational techniques for dose determination. These approaches lead to some complicated and time-consuming formulas, which motivated the current study. In this work, considering the portion of scattered radiation in the absorbed dose at a point of measurement, a numerical formula was obtained, from which a simple formula for calculating the equivalent square field was developed. Using polar coordinates and the inverse square law leads to a simple formula for calculation of the equivalent field. The presented method is an analytical approach with which one can estimate the equivalent square field of a rectangular field, and it may be used for a shielded field or an off-axis point. In addition, one can calculate the equivalent field of a rectangular field from the concept of scatter reduction with the inverse square law to a good approximation. This method may be useful in computing the Percentage Depth Dose and Tissue-Phantom Ratio, which are extensively used in treatment planning. PMID:22557801

  10. Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Takane, Yoshio

    2004-01-01

    We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method…

  11. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

    NASA Astrophysics Data System (ADS)

    Borodachev, S. M.

    2016-06-01

    A simple derivation of the recursive least squares (RLS) method equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
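
    A minimal RLS update in the Kalman-filter form the abstract refers to is sketched below; the forgetting factor lam and the large initial covariance are conventional assumptions, not details from the paper.

    ```python
    import numpy as np

    def rls_step(theta, P, x, y, lam=1.0):
        """One recursive-least-squares update: theta is the current
        coefficient estimate, P its 'covariance', x the new regressor,
        y the new observation, lam an optional forgetting factor."""
        Px = P @ x
        k = Px / (lam + x @ Px)          # gain vector
        theta = theta + k * (y - x @ theta)
        P = (P - np.outer(k, Px)) / lam  # covariance update
        return theta, P

    # estimate y = 2*x1 - x2 from streaming data
    rng = np.random.default_rng(1)
    theta, P = np.zeros(2), 1e3 * np.eye(2)
    for _ in range(200):
        x = rng.normal(size=2)
        y = 2 * x[0] - x[1] + 0.01 * rng.normal()
        theta, P = rls_step(theta, P, x, y)
    print(theta)   # close to [2, -1]
    ```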

  12. Iterative and multigrid methods in the finite element solution of incompressible and turbulent fluid flow

    NASA Astrophysics Data System (ADS)

    Lavery, N.; Taylor, C.

    1999-07-01

    Multigrid and iterative methods are used to reduce the solution time of the matrix equations which arise from the finite element (FE) discretisation of the time-independent equations of motion of an incompressible fluid in turbulent motion. Incompressible flow is solved by using the method of reduced interpolation for the pressure to satisfy the Brezzi-Babuska condition. The k-l model is used to complete the turbulence closure problem. The non-symmetric iterative matrix methods examined are the methods of least squares conjugate gradient (LSCG), biconjugate gradient (BCG), conjugate gradient squared (CGS), and biconjugate gradient squared stabilised (BCGSTAB). The multigrid algorithm applied is based on the FAS algorithm of Brandt, and uses two and three levels of grids with a V-cycling schedule. These methods are all compared to the non-symmetric frontal solver.

  13. Storage and computationally efficient permutations of factorized covariance and square-root information matrices

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector-stored upper-triangular diagonal factorized covariance (UD) and vector stored upper-triangular square-root information filter (SRIF) arrays is presented. The method involves cyclical permutation of the rows and columns of the arrays and retriangularization with appropriate square-root-free fast Givens rotations or elementary slow Givens reflections. A minimal amount of computation is performed and only one scratch vector of size N is required, where N is the column dimension of the arrays. To make the method efficient for large SRIF arrays on a virtual memory machine, three additional scratch vectors each of size N are used to avoid expensive paging faults. The method discussed is compared with the methods and routines of Bierman's Estimation Subroutine Library (ESL).

  14. Numerical modelling of rapid, flow-like landslides across 3-D terrains: a Tsunami Squares approach to El Picacho landslide, El Salvador, September 19, 1982

    NASA Astrophysics Data System (ADS)

    Wang, Jiajia; Ward, Steven N.; Xiao, Lili

    2015-06-01

    Flow-like landslides are rapidly moving fluid-solid mixtures that can cause significant destruction along paths that run far from their original sources. Existing models for runout prediction and motion simulation of flow-like landslides have many limitations. In this paper, we develop a new method named `Tsunami Squares' to simulate the generation, propagation and stoppage of flow-like landslides based on conservation of volume and momentum. Landslide materials in the new method form divisible squares that are displaced and then further fractured. The squares move under the influence of gravity-driven acceleration and suffer decelerations due to basal and dynamic friction. Distinctively, this method takes into account solid and fluid mechanics, particle interactions and flow regime transitions. We apply this approach to simulate the 1982 El Picacho landslide in San Salvador, the capital city of El Salvador. Landslide products from Tsunami Squares, such as runout distance, velocities, erosion and deposition depths, and impacted area, agree well with field investigations and eyewitness data.

  15. An improved partial least-squares regression method for Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Momenpour Tehran Monfared, Ali; Anis, Hanan

    2017-10-01

    It is known that the performance of partial least-squares (PLS) regression analysis can be improved using the backward variable selection method (BVSPLS). In this paper, we further improve BVSPLS based on a novel selection mechanism. The proposed method is based on sorting the weighted regression coefficients, and the importance of each variable in the sorted list is then evaluated using the root mean square error of prediction (RMSEP) criterion in each iteration step. Our Improved BVSPLS (IBVSPLS) method has been applied to leukemia and heparin data sets and led to an improvement in the limit of detection of Raman biosensing ranging from 10% to 43% compared to PLS. Our IBVSPLS was also compared to the jack-knifing (simpler) and genetic algorithm (more complex) methods. Our method was consistently better than the jack-knifing method and showed either similar or better performance compared to the genetic algorithm.
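
    The exact weighting scheme of IBVSPLS is not given in the abstract, so the sketch below shows only a generic backward variable selection loop around scikit-learn's PLSRegression, ranking variables by plain |coefficients| and scoring subsets by cross-validated RMSEP; the component count, fold count and stopping size are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def backward_pls(X, y, n_components=3, min_vars=4):
    """Generic backward variable selection around PLS: repeatedly drop
    the variable with the smallest |coefficient| and keep the subset
    with the lowest cross-validated RMSEP."""
    keep = list(range(X.shape[1]))
    best_rmsep, best_keep = np.inf, keep[:]
    while len(keep) >= min_vars:
        pls = PLSRegression(n_components=min(n_components, len(keep)))
        y_cv = cross_val_predict(pls, X[:, keep], y, cv=5)
        rmsep = float(np.sqrt(np.mean((y - np.ravel(y_cv)) ** 2)))
        if rmsep < best_rmsep:
            best_rmsep, best_keep = rmsep, keep[:]
        pls.fit(X[:, keep], y)
        coefs = np.abs(np.ravel(pls.coef_))
        del keep[int(np.argmin(coefs))]   # drop least informative variable
    return best_keep, best_rmsep

# Toy usage: variables 2, 5 and 11 carry the signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))
y = X[:, [2, 5, 11]] @ np.array([1.0, -2.0, 1.5]) + 0.1 * rng.normal(size=60)
print(backward_pls(X, y)[0])
```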

  16. Accuracy of least-squares methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bochev, Pavel B.; Gunzburger, Max D.

    1993-01-01

    Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is the fact that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. Although standard techniques for deriving error estimates fail for these formulations, the computational evidence suggests that the methods are, at the least, nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.

  17. Least-squares finite element methods for compressible Euler equations

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Carey, G. F.

    1990-01-01

    A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L2 method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.

  18. Comparison between results of solution of Burgers' equation and Laplace's equation by Galerkin and least-square finite element methods

    NASA Astrophysics Data System (ADS)

    Adib, Arash; Poorveis, Davood; Mehraban, Farid

    2018-03-01

    In this research, two equations are considered as examples of hyperbolic and elliptic equations, and two finite element methods are applied for solving them. The purpose of this research is the selection of the suitable method for solving each of the two equations. Burgers' equation is a hyperbolic equation: a pure advection (no diffusion) equation that is one-dimensional and unsteady. A sudden shock wave is introduced to the model, and this wave should move without deformation. Laplace's equation is an elliptic equation that is steady and two-dimensional; its solution in an earth dam is considered. By solving Laplace's equation, the pressure head and the value of seepage in the X and Y directions are calculated at different points of the earth dam, and finally the water table in the dam is shown. For Burgers' equation, the least-squares method can capture the movement of the wave (with some oscillation) but the Galerkin method cannot; the best scheme for Burgers' equation is to discretize space by the least-squares finite element method and time by forward differences. For Laplace's equation, both the Galerkin and least-squares methods reproduce the water table in the earth dam correctly.

  19. Estimators of The Magnitude-Squared Spectrum and Methods for Incorporating SNR Uncertainty

    PubMed Central

    Lu, Yang; Loizou, Philipos C.

    2011-01-01

    Statistical estimators of the magnitude-squared spectrum are derived based on the assumption that the magnitude-squared spectrum of the noisy speech signal can be computed as the sum of the (clean) signal and noise magnitude-squared spectra. Maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators are derived based on a Gaussian statistical model. The gain function of the MAP estimator was found to be identical to the gain function used in the ideal binary mask (IdBM) that is widely used in computational auditory scene analysis (CASA). As such, it is binary and assumes the value of 1 if the local SNR exceeds 0 dB, and the value of 0 otherwise. By modeling the local instantaneous SNR as an F-distributed random variable, soft masking methods incorporating SNR uncertainty were derived. In particular, the soft masking method that weights the noisy magnitude-squared spectrum by the a priori probability that the local SNR exceeds 0 dB was shown to be identical to the Wiener gain function. Results indicated that the proposed estimators yielded significantly better speech quality than the conventional MMSE spectral power estimators, in terms of lower residual noise and lower speech distortion. PMID:21886543
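
    The two gain functions singled out by the abstract are simple enough to state in code; the sketch below applies the hard binary mask and the Wiener-type soft mask to a few linear local-SNR values (the SNR values themselves are illustrative).

```python
import numpy as np

def binary_mask_gain(snr):
    """Hard MAP/IdBM gain: 1 where the local SNR exceeds 0 dB (snr > 1)."""
    return (snr > 1.0).astype(float)

def wiener_gain(snr):
    """Soft mask: weighting by P(local SNR > 0 dB) reduces to the
    Wiener gain snr / (1 + snr) under the paper's model."""
    return snr / (1.0 + snr)

snr = np.array([0.1, 0.5, 1.0, 2.0, 10.0])   # illustrative linear SNRs
print(binary_mask_gain(snr))                  # hard 0/1 mask
print(wiener_gain(snr))                       # smooth soft mask
```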

  20. Use of partial least squares regression to impute SNP genotypes in Italian cattle breeds.

    PubMed

    Dimauro, Corrado; Cellesi, Massimo; Gaspa, Giustino; Ajmone-Marsan, Paolo; Steri, Roberto; Marras, Gabriele; Macciotta, Nicolò P P

    2013-06-05

    The objective of the present study was to test the ability of the partial least squares regression technique to impute genotypes from low density single nucleotide polymorphism (SNP) panels (i.e., 3K or 7K) to a high density panel with 50K SNP. No pedigree information was used. Data consisted of 2093 Holstein, 749 Brown Swiss and 479 Simmental bulls genotyped with the Illumina 50K Beadchip. First, a single-breed approach was applied by using only data from Holstein animals. Then, to enlarge the training population, data from the three breeds were combined and a multi-breed analysis was performed. Accuracies of genotypes imputed using the partial least squares regression method were compared with those obtained by using the Beagle software. The impact of genotype imputation on breeding value prediction was evaluated for milk yield, fat content and protein content. In the single-breed approach, the accuracy of imputation using partial least squares regression was around 90 and 94% for the 3K and 7K platforms, respectively; the corresponding accuracies obtained with Beagle were around 85% and 90%. Moreover, the computing time required by the partial least squares regression method was on average around 10 times lower than that required by Beagle. Using the partial least squares regression method in the multi-breed approach resulted in lower imputation accuracies than using single-breed data. The impact of SNP-genotype imputation on the accuracy of direct genomic breeding values was small. The correlation between estimates of genetic merit obtained by using imputed versus actual genotypes was around 0.96 for the 7K chip. The results of the present work suggest that the partial least squares regression imputation method could be useful to impute SNP genotypes when pedigree information is not available.

  1. Least squares estimation of avian molt rates

    USGS Publications Warehouse

    Johnson, D.H.

    1989-01-01

    A straightforward least squares method of estimating the rate at which birds molt feathers is presented, suitable for birds captured more than once during the period of molt. The date of molt onset can also be estimated. The method is applied to male and female mourning doves.

  2. Penalized Multi-Way Partial Least Squares for Smooth Trajectory Decoding from Electrocorticographic (ECoG) Recording

    PubMed Central

    Eliseyev, Andrey; Aksenova, Tetiana

    2016-01-01

    In the current paper the decoding algorithms for motor-related BCI systems for continuous upper limb trajectory prediction are considered. Two methods for the smooth prediction, namely Sobolev and Polynomial Penalized Multi-Way Partial Least Squares (PLS) regressions, are proposed. The methods are compared to the Multi-Way Partial Least Squares and Kalman Filter approaches. The comparison demonstrated that the proposed methods combined the prediction accuracy of the algorithms of the PLS family and trajectory smoothness of the Kalman Filter. In addition, the prediction delay is significantly lower for the proposed algorithms than for the Kalman Filter approach. The proposed methods could be applied in a wide range of applications beyond neuroscience. PMID:27196417

  3. Effect of genetic algorithm as a variable selection method on different chemometric models applied for the analysis of binary mixture of amoxicillin and flucloxacillin: A comparative study

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed

    2016-03-01

    Different chemometric models were applied for the quantitative analysis of amoxicillin (AMX) and flucloxacillin (FLX) in their binary mixtures, namely, partial least squares (PLS), spectral residual augmented classical least squares (SRACLS), concentration residual augmented classical least squares (CRACLS) and artificial neural networks (ANNs). All methods were applied with and without a variable selection procedure (genetic algorithm, GA). The methods were used for the quantitative analysis of the drugs in laboratory-prepared mixtures and a real market sample by processing the UV spectral data. More robust and simpler models were obtained by applying GA. The proposed methods were found to be rapid, simple and to require no preliminary separation steps.

  4. Multiway analysis methods applied to the fluorescence excitation-emission dataset for the simultaneous quantification of valsartan and amlodipine in tablets

    NASA Astrophysics Data System (ADS)

    Dinç, Erdal; Ertekin, Zehra Ceren; Büker, Eda

    2017-09-01

    In this study, excitation-emission matrix datasets, which have strongly overlapping bands, were processed using four different chemometric calibration algorithms, namely parallel factor analysis, Tucker3, three-way partial least squares and unfolded partial least squares, for the simultaneous quantitative estimation of valsartan and amlodipine besylate in tablets. No preliminary separation step was used before applying the parallel factor analysis, Tucker3, three-way partial least squares and unfolded partial least squares approaches to the analysis of the related drug substances in the samples. A three-way excitation-emission matrix data array was obtained by concatenating the excitation-emission matrices of the calibration set, the validation set, and the commercial tablet samples. This data array was used to build the parallel factor analysis, Tucker3, three-way partial least squares and unfolded partial least squares calibrations and to predict the amounts of valsartan and amlodipine besylate in the samples. For all the methods, calibration and prediction of valsartan and amlodipine besylate were performed in the working concentration range of 0.25-4.50 μg/mL. The validity and the performance of all the proposed methods were checked using the validation parameters. From the analysis results, it was concluded that the described two-way and three-way algorithmic methods are very useful for the simultaneous quantitative resolution and routine analysis of the related drug substances in marketed samples.

  5. Least Median of Squares Filtering of Locally Optimal Point Matches for Compressible Flow Image Registration

    PubMed Central

    Castillo, Edward; Castillo, Richard; White, Benjamin; Rojo, Javier; Guerrero, Thomas

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares compressible flow model that describes the displacement of a single voxel and lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. PMID:22797602
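
    The abstract's filtering stage rests on least median of squares fitting; a generic random-sampling LMedS estimator (not the paper's full forward-search filter) can be sketched as follows, with the trial count and toy data as assumptions.

```python
import numpy as np

def least_median_of_squares(X, y, n_trials=500, seed=0):
    """LMedS by random sampling: fit minimal subsets and keep the fit
    whose squared residuals have the smallest median."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best_beta, best_med = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=p, replace=False)   # minimal subset
        try:
            beta = np.linalg.solve(X[idx], y[idx])
        except np.linalg.LinAlgError:
            continue                                 # singular subset
        med = np.median((y - X @ beta) ** 2)
        if med < best_med:
            best_med, best_beta = med, beta
    return best_beta

# Toy usage: the line fit survives 40% gross outliers.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.uniform(0, 10, 100)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=100)
y[:40] += rng.uniform(5, 20, 40)
print(least_median_of_squares(X, y))
```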

  6. A simplified fractional order impedance model and parameter identification method for lithium-ion batteries

    PubMed Central

    Yang, Qingxia; Xu, Jun; Cao, Binggang; Li, Xiuqing

    2017-01-01

    Identification of internal parameters of lithium-ion batteries is a useful tool to evaluate battery performance, and requires an effective model and algorithm. Based on the least square genetic algorithm, a simplified fractional order impedance model for lithium-ion batteries and the corresponding parameter identification method were developed. The simplified model was derived from the analysis of the electrochemical impedance spectroscopy data and the transient response of lithium-ion batteries with different states of charge. In order to identify the parameters of the model, an equivalent tracking system was established, and the method of least square genetic algorithm was applied using the time-domain test data. Experiments and computer simulations were carried out to verify the effectiveness and accuracy of the proposed model and parameter identification method. Compared with a second-order resistance-capacitance (2-RC) model and recursive least squares method, small tracing voltage fluctuations were observed. The maximum battery voltage tracing error for the proposed model and parameter identification method is within 0.5%; this demonstrates the good performance of the model and the efficiency of the least square genetic algorithm to estimate the internal parameters of lithium-ion batteries. PMID:28212405

  7. Evaluation of the Bitterness of Traditional Chinese Medicines using an E-Tongue Coupled with a Robust Partial Least Squares Regression Method.

    PubMed

    Lin, Zhaozhou; Zhang, Qiao; Liu, Ruixin; Gao, Xiaojie; Zhang, Lu; Kang, Bingya; Shi, Junhan; Wu, Zidan; Gui, Xinjing; Li, Xuelin

    2016-01-25

    To accurately, safely, and efficiently evaluate the bitterness of Traditional Chinese Medicines (TCMs), a robust predictor was developed using the robust partial least squares (RPLS) regression method based on data obtained from an electronic tongue (e-tongue) system. The data quality was verified by the Grubbs' test. Moreover, potential outliers were detected based on both the standardized residual and the score distance calculated for each sample. The performance of RPLS on the dataset before and after outlier detection was compared to other state-of-the-art methods including multivariate linear regression, least squares support vector machine, and the plain partial least squares regression. Both R² and the root-mean-square error (RMSE) of cross-validation (CV) were recorded for each model. With four latent variables, a robust RMSECV value of 0.3916, with bitterness values ranging from 0.63 to 4.78, was obtained for the RPLS model that was constructed based on the dataset including outliers. Meanwhile, the RMSECV calculated using the models constructed by the other methods was larger than that of the RPLS model. After six outliers were excluded, the performance of all benchmark methods markedly improved, but the difference between the RPLS models constructed before and after outlier exclusion was negligible. In conclusion, the bitterness of TCM decoctions can be accurately evaluated with the RPLS model constructed using e-tongue data.

  8. Quantized kernel least mean square algorithm.

    PubMed

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
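
    The quantization rule described above is compact enough to sketch directly: if a new input falls within a distance eps of an existing center, its update is merged into that center's coefficient; otherwise the dictionary grows. The kernel width, step size and quantization threshold below are assumptions.

```python
import numpy as np

def qklms(X, y, eta=0.5, sigma=1.0, eps=0.3):
    """Quantized kernel LMS with a Gaussian kernel: inputs closer than
    eps to an existing center update that center's coefficient instead
    of growing the dictionary."""
    centers, alphas = [], []
    def predict(x):
        if not centers:
            return 0.0
        d2 = np.sum((np.asarray(centers) - x) ** 2, axis=1)
        return float(np.dot(alphas, np.exp(-d2 / (2.0 * sigma ** 2))))
    for x, target in zip(X, y):
        x = np.asarray(x, dtype=float)
        e = target - predict(x)                       # a priori error
        if centers:
            d = np.linalg.norm(np.asarray(centers) - x, axis=1)
            j = int(np.argmin(d))
            if d[j] <= eps:
                alphas[j] += eta * e                  # merge into closest center
                continue
        centers.append(x)
        alphas.append(eta * e)                        # grow the network
    return centers, alphas

# Toy usage: learn y = sin(x) online with a bounded dictionary.
xs = np.linspace(0, 6, 400).reshape(-1, 1)
centers, _ = qklms(xs, np.sin(xs).ravel())
print(len(centers), "centers for 400 samples")
```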

  9. Speckle noise removal applied to ultrasound image of carotid artery based on total least squares model.

    PubMed

    Yang, Lei; Lu, Jun; Dai, Ming; Ren, Li-Jie; Liu, Wei-Zong; Li, Zhen-Zhou; Gong, Xue-Hao

    2016-10-06

    An ultrasonic image speckle noise removal method based on a total least squares model is proposed and applied to images of cardiovascular structures such as the carotid artery. On the basis of the least squares principle, the related minimum-squares formulation is applied to the cardiac ultrasound speckle noise removal process to establish the total least squares model; an orthogonal projection transformation is then applied to the output of the model, realizing the denoising of the cardiac ultrasound image speckle noise. Experimental results show that the improved algorithm can greatly improve the resolution of the image and meet the needs of clinical diagnosis and treatment of the cardiovascular system in the head and neck. Furthermore, the success in imaging carotid arteries has strong implications for neurological complications such as stroke.
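
    The imaging pipeline itself is not reproducible from the abstract, but the total least squares building block it relies on has a classical closed form via the SVD of the augmented matrix [A | b]; a generic sketch (with synthetic data, not ultrasound) follows.

```python
import numpy as np

def total_least_squares(A, b):
    """Classical TLS solution of A x ~= b via the SVD of [A | b]
    (errors allowed in both A and b); assumes the last component of
    the smallest right singular vector is nonzero."""
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                 # singular vector of the smallest singular value
    return -v[:-1] / v[-1]

# Toy usage with noise in both the matrix and the observations.
rng = np.random.default_rng(0)
x_true = np.array([1.5, -0.5])
A = rng.normal(size=(50, 2))
b = (A + 0.05 * rng.normal(size=A.shape)) @ x_true + 0.05 * rng.normal(size=50)
print(total_least_squares(A, b))
```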

  10. Two Enhancements of the Logarithmic Least-Squares Method for Analyzing Subjective Comparisons

    DTIC Science & Technology

    1989-03-25

    For this model, the total sum of squares (SSTO), defined as SSTO = Σ_{i=1}^{n} (y_i − ȳ)², can be partitioned into error and regression sums of squares, which measure the variation of the regression line around the mean value. Mathematically, for the model given by equation A.4, SSTO = SSE + SSR (A.6), where SSTO is the total sum of squares (i.e., the variance of the y_i's), SSE is the error sum of squares, and SSR is the regression sum of squares.

  11. On the method of least squares. II. [for calculation of covariance matrices and optimization algorithms

    NASA Technical Reports Server (NTRS)

    Jefferys, W. H.

    1981-01-01

    A least squares method proposed previously for solving a general class of problems is expanded in two ways. First, covariance matrices related to the solution are calculated and their interpretation is given. Second, improved methods of solving the normal equations related to those of Marquardt (1963) and Fletcher and Powell (1963) are developed for this approach. These methods may converge in cases where Newton's method diverges or converges slowly.

  12. Multiplier less high-speed squaring circuit for binary numbers

    NASA Astrophysics Data System (ADS)

    Sethi, Kabiraj; Panda, Rutuparna

    2015-03-01

    The squaring operation is important in many applications in signal processing, cryptography, etc. In general, the squaring circuits reported in the literature use fast multipliers. A novel idea of a squaring circuit without using multipliers is proposed in this paper. The ancient Indian method for squaring decimal numbers is extended here to binary numbers. The key to our success is that no multiplier is used; instead, one squaring circuit is used. The hardware architecture of the proposed squaring circuit is presented. The design is coded in VHDL, synthesised and simulated in Xilinx ISE Design Suite 10.1 (Xilinx Inc., San Jose, CA, USA), and implemented on a Xilinx Virtex 4vls15sf363-12 device (Xilinx Inc.). The results in terms of time delay and area are compared with both the modified Booth's algorithm and a squaring circuit using Vedic multipliers. Our proposed squaring circuit seems to have better performance in terms of both speed and area.
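
    As an algorithmic illustration of multiplier-less squaring (the paper's hardware uses the Vedic duplex decomposition rather than recursion), the following sketch squares a non-negative integer using only shifts and adds via the identities (2k)^2 = 4k^2 and (2k+1)^2 = 4k^2 + 4k + 1.

```python
def square_shift_add(n: int) -> int:
    """Square a non-negative integer using only shifts and adds:
    (2k)^2 = 4*k^2 and (2k+1)^2 = 4*k^2 + 4*k + 1."""
    if n < 2:
        return n                     # 0^2 = 0, 1^2 = 1
    k = n >> 1                       # halve (drop the low bit)
    s = square_shift_add(k) << 2     # 4 * k^2
    if n & 1:                        # odd input adds 4k + 1
        s += (k << 2) + 1
    return s

assert all(square_shift_add(i) == i * i for i in range(1000))
```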

  13. DIFFERENTIATION OF AURANTII FRUCTUS IMMATURUS AND FRUCTUS PONICIRI TRIFOLIATAE IMMATURUS BY FLOW-INJECTION WITH ULTRAVIOLET SPECTROSCOPIC DETECTION AND PROTON NUCLEAR MAGNETIC RESONANCE USING PARTIAL LEAST-SQUARES DISCRIMINANT ANALYSIS.

    PubMed

    Zhang, Mengliang; Zhao, Yang; Harrington, Peter de B; Chen, Pei

    2016-03-01

    Two simple fingerprinting methods, flow-injection coupled to ultraviolet spectroscopy and proton nuclear magnetic resonance, were used for discriminating between Aurantii fructus immaturus and Fructus poniciri trifoliatae immaturus. Both methods were combined with partial least-squares discriminant analysis. In the flow-injection method, four data representations were evaluated: total ultraviolet absorbance chromatograms, averaged ultraviolet spectra, absorbance at 193, 205, 225, and 283 nm, and absorbance at 225 and 283 nm. Prediction rates of 100% were achieved for all data representations by partial least-squares discriminant analysis using leave-one-sample-out cross-validation. The prediction rate for the proton nuclear magnetic resonance data by partial least-squares discriminant analysis with leave-one-sample-out cross-validation was also 100%. A new validation set of data was collected by flow-injection with ultraviolet spectroscopic detection two weeks later and predicted by the partial least-squares discriminant analysis models constructed from the initial data representations with no parameter changes. The classification rates were 95% with the total ultraviolet absorbance chromatogram datasets and 100% with the other three datasets. Flow-injection with ultraviolet detection and proton nuclear magnetic resonance are simple, high throughput, and low-cost methods for discrimination studies.

  14. An analytical method to calculate equivalent fields to irregular symmetric and asymmetric photon fields.

    PubMed

    Tahmasebi Birgani, Mohamad J; Chegeni, Nahid; Zabihzadeh, Mansoor; Hamzian, Nima

    2014-01-01

    Equivalent field is frequently used for central axis depth-dose calculations of rectangular- and irregular-shaped photon beams. As most of the proposed models to calculate the equivalent square field are dosimetry based, a simple physics-based method to calculate the equivalent square field size was used as the basis of this study. A table of the sides of the equivalent square or rectangular fields was constructed and then compared with the well-known tables of the BJR and of Venselaar et al., with average relative error percentages of 2.5 ± 2.5% and 1.5 ± 1.5%, respectively. To evaluate the accuracy of this method, the percentage depth doses (PDDs) were measured for some special irregular symmetric and asymmetric treatment fields and their equivalent squares on a Siemens Primus Plus linear accelerator for both energies, 6 and 18 MV. The mean relative difference of the PDD measurements for these fields and their equivalent squares was approximately 1% or less. As a result, this method can be employed to calculate the equivalent field not only for rectangular fields but also for any irregular symmetric or asymmetric field. © 2013 American Association of Medical Dosimetrists. Published by American Association of Medical Dosimetrists. All rights reserved.

  15. A Comparison of Normal and Elliptical Estimation Methods in Structural Equation Models.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.; Cheevatanarak, Suchittra

    Monte Carlo simulation compared chi-square statistics, parameter estimates, and root mean square error of approximation values using normal and elliptical estimation methods. Three research conditions were imposed on the simulated data: sample size, population contamination percent, and kurtosis. A Bentler-Weeks structural model established the…

  16. 2-D weighted least-squares phase unwrapping

    DOEpatents

    Ghiglia, Dennis C.; Romero, Louis A.

    1995-01-01

    Weighted values of interferometric signals are unwrapped by determining the least squares solution of phase unwrapping for unweighted values of the interferometric signals; and then determining the least squares solution of phase unwrapping for weighted values of the interferometric signals by preconditioned conjugate gradient methods using the unweighted solutions as preconditioning values. An output is provided that is representative of the least squares solution of phase unwrapping for weighted values of the interferometric signals.

  17. 2-D weighted least-squares phase unwrapping

    DOEpatents

    Ghiglia, D.C.; Romero, L.A.

    1995-06-13

    Weighted values of interferometric signals are unwrapped by determining the least squares solution of phase unwrapping for unweighted values of the interferometric signals; and then determining the least squares solution of phase unwrapping for weighted values of the interferometric signals by preconditioned conjugate gradient methods using the unweighted solutions as preconditioning values. An output is provided that is representative of the least squares solution of phase unwrapping for weighted values of the interferometric signals. 6 figs.
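
    The unweighted least-squares solve that both patent records use as the preconditioning step has a well-known closed form via the discrete cosine transform (Ghiglia & Romero); the sketch below implements only that unweighted building block, not the patented weighted PCG iteration, and the grid size in the usage example is an assumption.

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(a):
    """Wrap values into [-pi, pi)."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def unwrap_ls_dct(psi):
    """Unweighted least-squares phase unwrapping: solve the discrete
    Poisson equation driven by wrapped phase differences with a DCT."""
    M, N = psi.shape
    dx = np.zeros((M, N)); dy = np.zeros((M, N))
    dx[:, :-1] = wrap(psi[:, 1:] - psi[:, :-1])
    dy[:-1, :] = wrap(psi[1:, :] - psi[:-1, :])
    rx = dx.copy(); rx[:, 1:] -= dx[:, :-1]
    ry = dy.copy(); ry[1:, :] -= dy[:-1, :]
    rho = rx + ry                                   # Laplacian of the solution
    R = dctn(rho, type=2, norm="ortho")
    i = np.arange(M)[:, None]; j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0                               # avoid 0/0 at the DC term
    phi = R / denom
    phi[0, 0] = 0.0                                 # free additive constant
    return idctn(phi, type=2, norm="ortho")

# Toy usage: a wrapped linear ramp is recovered up to a constant.
yy, xx = np.mgrid[0:64, 0:64]
true = 0.4 * xx + 0.1 * yy
print(np.ptp(unwrap_ls_dct(wrap(true)) - true))     # ~0
```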

  18. A weak Galerkin least-squares finite element method for div-curl systems

    NASA Astrophysics Data System (ADS)

    Li, Jichun; Ye, Xiu; Zhang, Shangyou

    2018-06-01

    In this paper, we introduce a weak Galerkin least-squares method for solving div-curl problem. This finite element method leads to a symmetric positive definite system and has the flexibility to work with general meshes such as hybrid mesh, polytopal mesh and mesh with hanging nodes. Error estimates of the finite element solution are derived. The numerical examples demonstrate the robustness and flexibility of the proposed method.

  19. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  20. Calibration and compensation method of three-axis geomagnetic sensor based on pre-processing total least square iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Zhang, X.; Xiao, W.

    2018-04-01

    As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. First, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. The sifter algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method needs no additional equipment or devices, can continuously update the calibration parameters, and compensates the geomagnetic sensor error better than the two-step estimation method.

  1. Solution of a few nonlinear problems in aerodynamics by the finite elements and functional least squares methods. Ph.D. Thesis - Paris Univ.; [mathematical models of transonic flow using nonlinear equations

    NASA Technical Reports Server (NTRS)

    Periaux, J.

    1979-01-01

    The numerical simulation of the transonic flows of idealized fluids and of incompressible viscous fluids by nonlinear least squares methods is presented. The nonlinear equations, the boundary conditions, and the various constraints controlling the two types of flow are described. The standard iterative methods for solving a quasi-elliptic nonlinear partial differential equation are reviewed, with emphasis placed on two examples: the fixed point method applied to the Gelder functional in the case of compressible subsonic flows, and the Newton method used in the technique of decomposition of the lifting potential. The new abstract least squares method is discussed. It consists of replacing the nonlinear equation by a minimization problem in an H^{-1}-type Sobolev space.

  2. Use of inequality constrained least squares estimation in small area estimation

    NASA Astrophysics Data System (ADS)

    Abeygunawardana, R. A. B.; Wickremasinghe, W. N.

    2017-05-01

    Traditional surveys provide estimates that are based only on the sample observations collected for the population characteristic of interest. However, these estimates may have unacceptably large variance for certain domains. Small Area Estimation (SAE) deals with determining precise and accurate estimates of population characteristics of interest for such domains. SAE usually uses least squares or maximum likelihood procedures incorporating prior information and current survey data. Many available methods in SAE use constraints in equality form, but there are practical situations where certain inequality restrictions on the model parameters are more realistic. When the estimation method is least squares, such restrictions lead to Inequality Constrained Least Squares (ICLS) estimates. In this study, the ICLS estimation procedure is applied to several proposed small area estimates.
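
    For box-type inequality restrictions (e.g., parameters known to lie between 0 and 1), an ICLS fit is directly available through SciPy's lsq_linear; more general linear inequality constraints would need a quadratic programming solver. The design, bounds and noise level below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Estimate parameters constrained to [0, 1], as when the small-area
# quantities are rates or proportions (illustrative design and noise).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
beta_true = np.array([0.2, 0.9, 0.5])
y = X @ beta_true + 0.05 * rng.normal(size=40)

res = lsq_linear(X, y, bounds=(0.0, 1.0))   # ICLS: 0 <= beta <= 1
print(res.x)
```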

  3. Inter-class sparsity based discriminative least square regression.

    PubMed

    Wen, Jie; Xu, Yong; Li, Zuoyong; Ma, Zhongli; Xu, Yuanrong

    2018-06-01

    Least squares regression is a very popular supervised classification method. However, two main issues greatly limit its performance. The first is that it only focuses on fitting the input features to the corresponding output labels while ignoring the correlations among samples. The second is that the label matrix used, i.e., the zero-one label matrix, is inappropriate for classification. To solve these problems and improve performance, this paper presents a novel method, inter-class sparsity based discriminative least square regression (ICS_DLSR), for multi-class classification. Different from other methods, the proposed method requires that the transformed samples share a common sparsity structure within each class. To this end, an inter-class sparsity constraint is introduced into the least squares regression model such that the margins of samples from the same class can be greatly reduced while those of samples from different classes can be enlarged. In addition, an error term with a row-sparsity constraint is introduced to relax the strict zero-one label matrix, which allows the method to be more flexible in learning the discriminative transformation matrix. These factors encourage the method to learn a more compact and discriminative transformation for regression, and thus it has the potential to perform better than other methods. Extensive experimental results show that the proposed method achieves the best performance in comparison with other methods for multi-class classification. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Examinations of electron temperature calculation methods in Thomson scattering diagnostics.

    PubMed

    Oh, Seungtae; Lee, Jong Ha; Wi, Hanmin

    2012-10-01

    Electron temperature from the Thomson scattering diagnostic is derived through indirect calculation based on a theoretical model. The χ-square test is commonly used in the calculation, and the reliability of the calculation method depends strongly on the noise level of the input signals. In the simulations, noise effects on the χ-square test are examined, and a scale factor test is proposed as an alternative method.

  5. Sampling strategies for square and boll-feeding plant bugs (Hemiptera: Miridae) occurring on cotton

    USDA-ARS?s Scientific Manuscript database

    Six sampling methods targeting square and boll-feeding plant bugs on cotton were compared during three cotton growth periods (early-season squaring, early bloom, and peak through late bloom) by samplers differing in experience (with prior years of sampling experience or no experience) along the coas...

  6. Assessing Fit and Dimensionality in Least Squares Metric Multidimensional Scaling Using Akaike's Information Criterion

    ERIC Educational Resources Information Center

    Ding, Cody S.; Davison, Mark L.

    2010-01-01

    Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…

  7. An Alternating Least Squares Method for the Weighted Approximation of a Symmetric Matrix.

    ERIC Educational Resources Information Center

    ten Berge, Jos M. F.; Kiers, Henk A. L.

    1993-01-01

    R. A. Bailey and J. C. Gower explored approximating a symmetric matrix "B" by another, "C," in the least squares sense when the squared discrepancies for diagonal elements receive specific nonunit weights. A solution is proposed where "C" is constrained to be positive semidefinite and of a fixed rank. (SLD)

  8. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2002-01-01

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

  9. Parametric output-only identification of time-varying structures using a kernel recursive extended least squares TARMA approach

    NASA Astrophysics Data System (ADS)

    Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim

    2018-01-01

    The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.

  10. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

    A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations for the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.

  11. Tracing the conformational changes in BSA using FRET with environmentally-sensitive squaraine probes

    NASA Astrophysics Data System (ADS)

    Govor, Iryna V.; Tatarets, Anatoliy L.; Obukhova, Olena M.; Terpetschnig, Ewald A.; Gellerman, Gary; Patsenker, Leonid D.

    2016-06-01

    A new potential method for detecting conformational changes in hydrophobic proteins such as bovine serum albumin (BSA) is introduced. The method is based on the change in Förster resonance energy transfer (FRET) efficiency between protein-sensitive fluorescent probes. In contrast to conventional FRET-based methods, in this new approach the donor and acceptor dyes are not covalently linked to the protein molecules. The performance of the new method is demonstrated using the protein-sensitive squaraine probes Square-634 (donor) and Square-685 (acceptor) to detect the urea-induced conformational changes of BSA. The FRET efficiency between these probes can be considered a more sensitive parameter for tracing protein unfolding than the changes in fluorescence intensity of each of these probes. Addition of urea followed by BSA unfolding causes a noticeable decrease in the emission intensities of these probes (a factor of 5.6 for Square-634 and 3.0 for Square-685), whereas the FRET efficiency changes by a factor of up to 17. Compared to the conventional method, the new approach is therefore a more sensitive way to detect conformational changes in BSA.

  12. Assessment of Gait Characteristics in Total Knee Arthroplasty Patients Using a Hierarchical Partial Least Squares Method.

    PubMed

    Wang, Wei; Ackland, David C; McClelland, Jodie A; Webster, Kate E; Halgamuge, Saman

    2018-01-01

    Quantitative gait analysis is an important tool in the objective assessment and management of total knee arthroplasty (TKA) patients. Studies evaluating gait patterns in TKA patients have tended to focus on discrete data such as spatiotemporal information, joint range of motion and peak values of kinematics and kinetics, or consider selected principal components of gait waveforms for analysis. These strategies may not have the capacity to capture small variations in gait patterns associated with each joint across an entire gait cycle, and may ultimately limit the accuracy of gait classification. The aim of this study was to develop an automatic feature extraction method to analyse patterns from high-dimensional autocorrelated gait waveforms. A general linear feature extraction framework was proposed and a hierarchical partial least squares method derived for discriminant analysis of multiple gait waveforms. The effectiveness of this strategy was verified using a dataset of joint angle and ground reaction force waveforms from 43 patients after TKA surgery and 31 healthy control subjects. Compared with principal component analysis and partial least squares methods, the hierarchical partial least squares method achieved generally better classification performance on all possible combinations of waveforms, with the highest classification accuracy. The novel hierarchical partial least squares method proposed is capable of capturing virtually all significant differences between TKA patients and the controls, and provides new insights into data visualization. The proposed framework presents a foundation for more rigorous classification of gait, and may ultimately be used to evaluate the effects of interventions such as surgery and rehabilitation.

  13. Evaluation of the Bitterness of Traditional Chinese Medicines using an E-Tongue Coupled with a Robust Partial Least Squares Regression Method

    PubMed Central

    Lin, Zhaozhou; Zhang, Qiao; Liu, Ruixin; Gao, Xiaojie; Zhang, Lu; Kang, Bingya; Shi, Junhan; Wu, Zidan; Gui, Xinjing; Li, Xuelin

    2016-01-01

    To accurately, safely, and efficiently evaluate the bitterness of Traditional Chinese Medicines (TCMs), a robust predictor was developed using the robust partial least squares (RPLS) regression method based on data obtained from an electronic tongue (e-tongue) system. The data quality was verified by the Grubbs' test. Moreover, potential outliers were detected based on both the standardized residual and the score distance calculated for each sample. The performance of RPLS on the dataset before and after outlier detection was compared to other state-of-the-art methods including multivariate linear regression, least squares support vector machine, and the plain partial least squares regression. Both R2 and the root-mean-square error (RMSE) of cross-validation (CV) were recorded for each model. With four latent variables, a robust RMSECV value of 0.3916, with bitterness values ranging from 0.63 to 4.78, was obtained for the RPLS model that was constructed based on the dataset including outliers. Meanwhile, the RMSECV calculated using the models constructed by the other methods was larger than that of the RPLS model. After six outliers were excluded, the performance of all benchmark methods markedly improved, but the difference between the RPLS models constructed before and after outlier exclusion was negligible. In conclusion, the bitterness of TCM decoctions can be accurately evaluated with the RPLS model constructed using e-tongue data. PMID:26821026

  14. Enhancement of waste activated sludge dewaterability using calcium peroxide pre-oxidation and chemical re-flocculation.

    PubMed

    Chen, Zhan; Zhang, Weijun; Wang, Dongsheng; Ma, Teng; Bai, Runying; Yu, Dezhong

    2016-10-15

    The effects of combined calcium peroxide (CaO2) oxidation and chemical re-flocculation on the dewatering performance and physicochemical properties of waste activated sludge were investigated in this study. The evolution of extracellular polymeric substances (EPS) distribution, composition and morphological properties was analyzed to unravel the sludge conditioning mechanism. It was found that sludge filtration performance was enhanced by calcium peroxide oxidation at the optimal dosage of 20 mg/g TSS. However, this enhancement was not observed at lower dosages, due to the absence of oxidation, and performance deteriorated at higher dosages because of the release of excess EPS, mainly as protein-like substances. The variation in the soluble EPS (SEPS) fraction could be fitted well with a pseudo-zero-order kinetic model under CaO2 treatment. At the same time, the extractable EPS content (SEPS and loosely bound EPS (LB-EPS)) increased dramatically, indicating that sludge flocs were effectively broken and their structure became looser after CaO2 addition. The sludge floc structure was reconstructed and sludge dewaterability was significantly enhanced by chemical re-flocculation (polyaluminium chloride (PACl), ferric chloride (FeCl3) and polyacrylamide (PAM)). The inorganic coagulants performed better in improving sludge filtration dewatering performance and reducing cake moisture content than the organic polymer, since they could act as skeleton builders and decrease the sludge compressibility. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Application of ceramic membranes with pre-ozonation for treatment of secondary wastewater effluent.

    PubMed

    Lehman, S Geno; Liu, Li

    2009-04-01

    Membrane fouling is an inevitable problem when microfiltration (MF) and ultrafiltration (UF) are used to treat wastewater treatment plant (WWTP) effluent. While historically the use of MF/UF for water and wastewater treatment has focused almost exclusively on polymeric membranes, new-generation ceramic membranes were recently introduced to the market, and they possess unique advantages over currently available polymeric membranes. Ceramic membranes are mechanically superior and are more resistant to severe chemical and thermal environments. Due to the robustness of ceramic membranes, strong oxidants such as ozone can be used as pretreatment to reduce membrane fouling. This paper presents results of a pilot study designed to investigate the application of new-generation ceramic membranes for WWTP effluent treatment. Ozonation and coagulation pretreatment were evaluated to optimize the membrane operation. The ceramic membrane demonstrated stable performance at a filtration flux of 100 gfd (170 LMH) at 20 degrees C with pretreatment using PACl (1 mg/L as Al) and ozone (4 mg/L). To understand the effects of ozone and coagulation pretreatment on organic foulants, the natural organic matter (NOM) in four waters - raw, ozone-treated, coagulation-treated, and ozone-followed-by-coagulation-treated wastewaters - was characterized using high performance size exclusion chromatography (HPSEC). The HPSEC analysis demonstrated that ozone treatment is effective at degrading the colloidal NOM which is likely responsible for the majority of membrane fouling.

  16. Solution of a Complex Least Squares Problem with Constrained Phase.

    PubMed

    Bydder, Mark

    2010-12-30

    The least squares solution of a complex linear equation is in general a complex vector with independent real and imaginary parts. In certain applications in magnetic resonance imaging, a solution is desired such that each element has the same phase. A direct method for obtaining the least squares solution to the phase constrained problem is described.
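
    A brute-force sketch of the phase-constrained problem helps fix ideas: for each candidate common phase theta, the optimal real vector u has a closed form, and a coarse scan over theta in [0, pi) picks the best pair (the paper gives a direct, scan-free method). The grid resolution and toy data are assumptions.

```python
import numpy as np

def phase_constrained_ls(A, b, n_angles=360):
    """Minimize ||A (u e^{i*theta}) - b|| over real u and a single common
    phase theta. For fixed theta, u = Re(A^H A)^{-1} Re(e^{-i*theta} A^H b);
    scanning theta over [0, pi) suffices because theta + pi flips u's sign."""
    G = (A.conj().T @ A).real
    h = A.conj().T @ b
    best = (np.inf, None, None)
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        u = np.linalg.solve(G, (np.exp(-1j * theta) * h).real)
        r = np.linalg.norm(A @ (u * np.exp(1j * theta)) - b)
        if r < best[0]:
            best = (r, u, theta)
    return best[1], best[2]

# Toy usage: data generated with a common phase of 0.7 rad.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 4)) + 1j * rng.normal(size=(30, 4))
u0 = rng.normal(size=4)
b = A @ (u0 * np.exp(0.7j))
u, theta = phase_constrained_ls(A, b)
print(round(theta, 3))   # ~0.7, up to grid resolution and a sign/pi flip
```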

  17. Three-dimensional convection of binary mixtures in porous media.

    PubMed

    Umla, R; Augustin, M; Huke, B; Lücke, M

    2011-11-01

    We investigate convection patterns of binary mixtures with a positive separation ratio in porous media. As setup, we choose the Rayleigh-Bénard system of a fluid layer heated from below. Results are obtained by a multimode Galerkin method. Using this method, we compute square and crossroll patterns, and we analyze their structural, bifurcation, and stability properties. Evidence is provided that, for a strong enough Soret effect, both structures exist as stable forms of convection. Some of their properties are found to be similar to square and crossroll convection in the system without porous medium. However, there are also qualitative differences. For example, squares can be destabilized by oscillatory perturbations with square symmetry in porous media, and their velocity isolines are deformed in the so-called Soret regime.

  18. Orthogonalizing EM: A design-based least squares algorithm.

    PubMed

    Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z G

    We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online.
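
    For ordinary least squares, the OEM update collapses to a particularly simple fixed-point iteration once the augmenting rows are chosen to make the Gram matrix a multiple of the identity; the sketch below uses d = ||X||_2^2 for that multiple and checks the singular-design case against the Moore-Penrose solution. The iteration count and toy data are assumptions.

```python
import numpy as np

def oem_ols(X, y, n_iter=5000):
    """OEM-style iteration for ordinary least squares: with d >= ||X||_2^2
    the implicit row augmentation makes the design column-orthogonal, and
    the EM update becomes a simple fixed-point step. From a zero start it
    tends to the Moore-Penrose solution when X is singular."""
    d = np.linalg.norm(X, 2) ** 2        # top squared singular value
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        beta = beta + X.T @ (y - X @ beta) / d
    return beta

# Toy usage with a fully aliased (singular) design.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5)); X[:, 4] = X[:, 0] + X[:, 1]
y = X[:, :4] @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.01 * rng.normal(size=100)
print(oem_ols(X, y))
print(np.linalg.pinv(X) @ y)             # matches the pinv solution
```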

  19. Infant perception of the rotating Kanizsa square.

    PubMed

    Yoshino, Daisuke; Idesawa, Masanori; Kanazawa, So; Yamaguchi, Masami K

    2010-04-01

    This study examined the perception of the rotating Kanizsa square by using a fixed-trial familiarization method. If the Kanizsa square is rotated across the pacmen, adult observers perceive not only a rotating illusory square but also an illusory expansion/contraction motion of this square. The phenomenon is called a "rotational dynamic illusion". In experiments 1 and 2, we investigated whether infants perceived the rotational dynamic illusion, finding that 3-8-month-old infants perceived it as a simple rotation of the Kanizsa square. In experiment 3, we investigated whether infants perceived the rotational dynamic illusion as a rotation of the Kanizsa square or as a deformation of shape, finding that 3-4-month-old infants did perceive it as a rotation of the Kanizsa square. Our results show that although 3-8-month-old infants perceive the rotating Kanizsa square, it is difficult for them to extract expansion/contraction motion from the rotational dynamic illusion. Copyright 2010 Elsevier Inc. All rights reserved.

  20. Stability indicating methods for the analysis of cefprozil in the presence of its alkaline induced degradation product

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed

    2016-04-01

    Three simple, specific, accurate and precise spectrophotometric methods were developed for the determination of cefprozil (CZ) in the presence of its alkaline induced degradation product (DCZ). The first method was the bivariate method, while the two other multivariate methods were partial least squares (PLS) and spectral residual augmented classical least squares (SRACLS). The multivariate methods were applied with and without variable selection procedure (genetic algorithm GA). These methods were tested by analyzing laboratory prepared mixtures of the above drug with its alkaline induced degradation product and they were applied to its commercial pharmaceutical products.

  1. Photogrammetric Method and Software for Stream Planform Identification

    NASA Astrophysics Data System (ADS)

    Stonedahl, S. H.; Stonedahl, F.; Lohberg, M. M.; Lusk, K.; Miller, D.

    2013-12-01

    Accurately characterizing the planform of a stream is important for many purposes, including recording measurement and sampling locations, monitoring change due to erosion or volumetric discharge, and spatial modeling of stream processes. While expensive surveying equipment or high resolution aerial photography can be used to obtain planform data, our research focused on developing a close-range photogrammetric method (and accompanying free/open-source software) to serve as a cost-effective alternative. This method involves securing and floating a wooden square frame on the stream surface at several locations, taking photographs from numerous angles at each location, and then post-processing and merging data from these photos using the corners of the square for reference points, unit scale, and perspective correction. For our test field site we chose a ~35 m reach along Black Hawk Creek in Sunderbruch Park (Davenport, IA), a small, slow-moving stream with overhanging trees. To quantify error we measured 88 distances between 30 marked control points along the reach. We calculated error by comparing these 'ground truth' distances to the corresponding distances extracted from our photogrammetric method. We placed the square at three locations along our reach and photographed it from multiple angles. The square corners, visible control points, and visible stream outline were hand-marked in these photos using the GIMP (open-source image editor). We wrote an open-source GUI in Java (hosted on GitHub), which allows the user to load marked-up photos, designate square corners and label control points. The GUI also extracts the marked pixel coordinates from the images. We also wrote several scripts (currently in MATLAB) that correct the pixel coordinates for radial distortion using Brown's lens distortion model, correct for perspective by forcing the four square corner pixels to form a parallelogram in 3-space, and rotate the points in order to correctly orient all photos of the same square location. Planform data from multiple photos (and multiple square locations) are combined using weighting functions that mitigate the error stemming from the markup process, imperfect camera calibration, etc. We have used our (beta) software to mark and process over 100 photos, yielding an average error of only 1.5% relative to our 88 measured lengths. Next we plan to translate the MATLAB scripts into Python and release their source code, at which point only free software, consumer-grade digital cameras, and inexpensive building materials will be needed for others to replicate this method at new field sites. [Figure: three sample photographs of the square with the created planform and control points.]
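
    The perspective-correction step described above, restricted to the planar case, amounts to estimating a homography that maps the four marked square corners back to a true square; the paper instead enforces a parallelogram constraint in 3-space, so the sketch below is only the simpler planar analogue, computed as a least-squares direct linear transform in NumPy. The pixel coordinates and the 1 m frame size are hypothetical.

```python
import numpy as np

def homography_dlt(src, dst):
    """Least-squares homography H mapping src to dst ((N,2) arrays,
    N >= 4) via the SVD null vector of the DLT system."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_h(H, pts):
    ph = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

# Hypothetical pixel corners of the floating frame, mapped to a 1 m square;
# the same H would then rectify the marked stream-outline pixels.
corners_px = np.array([[102, 388], [596, 402], [611, 90], [120, 75]], float)
corners_m = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
H = homography_dlt(corners_px, corners_m)
print(apply_h(H, corners_px))   # ~ unit-square corners
```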

  2. Combinatorics of least-squares trees.

    PubMed

    Mihaescu, Radu; Pachter, Lior

    2008-09-09

    A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.

  3. A decentralized square root information filter/smoother

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Belzer, M. R.

    1985-01-01

    A number of developments have recently led to considerable interest in the decentralization of linear least squares estimators. These developments are partly related to the impending emergence of VLSI technology, the realization of parallel processing, and the need for algorithmic ways to speed the solution of dynamically decoupled, high-dimensional estimation problems. A new method is presented for combining Square Root Information Filter (SRIF) estimates obtained from independent data sets. The new method involves an orthogonal transformation, and an information matrix filter 'homework' problem discussed by Schweppe (1973) is generalized. The SRIF orthogonal transformation methodology employed here was described by Bierman (1977).
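
    The core of such a combination step can be stated compactly: stacking two square-root information pairs (R_i, z_i) and triangularizing with a QR factorization yields the combined data equation. The numpy sketch below follows that standard formulation; it is an illustration of the idea, not Bierman and Belzer's exact algorithm.

      import numpy as np

      def combine_srif(R1, z1, R2, z2):
          """Merge two SRIF estimates (R_i x ~ z_i) from independent data sets
          by stacking the data equations and applying an orthogonal (QR)
          transformation; returns the combined square-root information pair."""
          A = np.vstack([R1, R2])
          b = np.concatenate([z1, z2])
          Q, R = np.linalg.qr(A)          # the orthogonal transformation
          z = Q.T @ b
          n = R1.shape[1]
          return R[:n, :], z[:n]

      R1 = np.array([[2.0, 0.5], [0.0, 1.0]]); z1 = np.array([2.5, 1.0])
      R2 = np.array([[1.0, 0.0], [0.0, 2.0]]); z2 = np.array([1.0, 2.0])
      R, z = combine_srif(R1, z1, R2, z2)
      print(np.linalg.solve(R, z))        # combined state estimate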

  4. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam associated with the external tank of the U.S. space shuttle has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.

  5. Theoretical study of the incompressible Navier-Stokes equations by the least-squares method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Loh, Ching Y.; Povinelli, Louis A.

    1994-01-01

    Usually the theoretical analysis of the Navier-Stokes equations is conducted via the Galerkin method, which leads to difficult saddle-point problems. This paper demonstrates that the least-squares method is a useful alternative tool for the theoretical study of partial differential equations, since it leads to minimization problems which can often be treated by an elementary technique. The principal part of the Navier-Stokes equations in the first-order velocity-pressure-vorticity formulation consists of two div-curl systems, so the three-dimensional div-curl system is thoroughly studied first. By introducing a dummy variable and by using the least-squares method, this paper shows that the div-curl system is properly determined and elliptic, and has a unique solution. The same technique is then employed to prove that the Stokes equations are properly determined and elliptic, and that four boundary conditions on a fixed boundary are required for three-dimensional problems. This paper also shows that under four combinations of non-standard boundary conditions the solution of the Stokes equations is unique. This paper emphasizes the application of the least-squares method and the div-curl method to derive a high-order version of differential equations and additional boundary conditions. Finally, an elementary method (integration by parts) is used to prove Friedrichs' inequalities related to the div and curl operators, which play an essential role in the analysis.

  6. A new algorithm for stand table projection models.

    Treesearch

    Quang V. Cao; V. Clark Baldwin

    1999-01-01

    The constrained least squares method is proposed as an algorithm for projecting stand tables through time. This method consists of three steps: (1) predict survival in each diameter class, (2) predict diameter growth, and (3) use the least squares approach to adjust the stand table to satisfy the constraints of future survival, average diameter, and stand basal area....
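
    Step (3) can be written as an equality-constrained least squares problem and solved through the Lagrange (KKT) system. The sketch below shows one standard way to do this, with an invented two-constraint example (total tree count and basal area); it is not necessarily Cao and Baldwin's exact formulation.

      import numpy as np

      def adjust_stand_table(n_pred, A, b):
          """Minimize ||n - n_pred||^2 subject to A n = b by solving
          the KKT (Lagrange multiplier) linear system."""
          m, k = A.shape
          K = np.block([[np.eye(k), A.T], [A, np.zeros((m, m))]])
          rhs = np.concatenate([n_pred, b])
          return np.linalg.solve(K, rhs)[:k]      # adjusted class frequencies

      # Invented example: 4 diameter classes (midpoints in cm) with predicted counts
      mid = np.array([10.0, 20.0, 30.0, 40.0])
      n_pred = np.array([120.0, 80.0, 40.0, 10.0])
      ba_per_tree = np.pi * (mid / 200.0) ** 2    # basal area (m^2) per tree at midpoint

      A = np.vstack([np.ones(4), ba_per_tree])    # constraints: survival total, basal area
      b = np.array([245.0, 7.8])                  # illustrative target totals
      print(adjust_stand_table(n_pred, A, b))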

  7. Coupling finite element and spectral methods: First results

    NASA Technical Reports Server (NTRS)

    Bernardi, Christine; Debit, Naima; Maday, Yvon

    1987-01-01

    A Poisson equation on a rectangular domain is solved by coupling two methods: the domain is divided into two squares, a finite element approximation is used on the first square and a spectral discretization is used on the second one. Two kinds of matching conditions on the interface are presented and compared. In both cases, error estimates are proved.

  8. The Least-Squares Estimation of Latent Trait Variables.

    ERIC Educational Resources Information Center

    Tatsuoka, Kikumi

    This paper presents a new method for estimating a given latent trait variable by the least-squares approach. The beta weights are obtained recursively with the help of Fourier series and expressed as functions of item parameters of response curves. The values of the latent trait variable estimated by this method and by the maximum likelihood method…

  9. Discrimination of Aurantii Fructus Immaturus and Fructus Poniciri Trifoliatae Immaturus by Flow Injection UV Spectroscopy (FIUV) and 1H NMR using Partial Least-squares Discriminant Analysis (PLS-DA)

    USDA-ARS?s Scientific Manuscript database

    Two simple fingerprinting methods, flow-injection UV spectroscopy (FIUV) and 1H nuclear magnetic resonance (NMR), for discrimination of Aurantii Fructus Immaturus and Fructus Poniciri Trifoliatae Immaturus were described. Both methods were combined with partial least-squares discriminant analysis...
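
    PLS-DA is, at its core, PLS regression on class-indicator columns followed by an argmax over the predicted responses. A minimal sketch with invented fingerprint data (scikit-learn) illustrates the idea.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(1)
      # Synthetic "fingerprints": two species with slightly shifted mean profiles
      X_a = 0.1 * rng.standard_normal((20, 50)) + np.linspace(0.0, 1.0, 50)
      X_b = 0.1 * rng.standard_normal((20, 50)) + np.linspace(0.1, 0.9, 50)
      X = np.vstack([X_a, X_b])
      Y = np.repeat([[1, 0], [0, 1]], 20, axis=0)     # one-hot class indicators

      plsda = PLSRegression(n_components=3).fit(X, Y)
      pred = plsda.predict(X).argmax(axis=1)          # class with largest response
      print((pred == Y.argmax(axis=1)).mean())        # training accuracy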

  10. Evaluation of fatty proportion in fatty liver using least squares method with constraints.

    PubMed

    Li, Xingsong; Deng, Yinhui; Yu, Jinhua; Wang, Yuanyuan; Shamdasani, Vijay

    2014-01-01

    Backscatter and attenuation parameters are not easily measured in clinical applications due to tissue inhomogeneity in the region of interest (ROI). A least squares method (LSM) that fits the echo signal power spectra from a ROI to a 3-parameter tissue model was used to obtain attenuation coefficient images of fatty liver. Since fat's attenuation value is higher than that of normal liver parenchyma, a reasonable threshold was chosen to evaluate the fatty proportion in fatty liver. Experimental results using clinical data of fatty liver illustrate that the least squares method can produce accurate attenuation estimates. The attenuation values were shown to have a positive correlation with the fatty proportion, which can be used to evaluate fatty liver syndrome.

  11. Geodesic least squares regression for scaling studies in magnetic confinement fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verdoolaege, Geert

    In regression analyses for deriving scaling laws that occur in various scientific disciplines, usually standard regression methods have been applied, of which ordinary least squares (OLS) is the most popular. However, concerns have been raised with respect to several assumptions underlying OLS in its application to scaling laws. We here discuss a new regression method that is robust in the presence of significant uncertainty on both the data and the regression model. The method, which we call geodesic least squares regression (GLS), is based on minimization of the Rao geodesic distance on a probabilistic manifold. We demonstrate the superiority of the method using synthetic data and we present an application to the scaling law for the power threshold for the transition to the high confinement regime in magnetic confinement fusion devices.

  12. Note: A novel method for generating multichannel quasi-square-wave pulses.

    PubMed

    Mao, C; Zou, X; Wang, X

    2015-08-01

    A 21-channel quasi-square-wave nanosecond pulse generator was constructed. The generator consists of a high-voltage square-wave pulser and a channel divider. Using an electromagnetic relay as a switch and a 50-Ω polyethylene cable as a pulse forming line, the high-voltage pulser produces a 10-ns square-wave pulse of 1070 V. With a specially designed resistor-cable network, the channel divider divides the high-voltage square-wave pulse into 21 identical 10-ns quasi-square-wave pulses of 51 V, exactly equal to 1070 V/21. The generator can operate not only in a simultaneous mode but also in a delay mode if the cables in the channel divider are different in length.

  13. Analysis of Lard in Lipstick Formulation Using FTIR Spectroscopy and Multivariate Calibration: A Comparison of Three Extraction Methods.

    PubMed

    Waskitho, Dri; Lukitaningsih, Endang; Sudjadi; Rohman, Abdul

    2016-01-01

    Analysis of lard extracted from a lipstick formulation containing castor oil has been performed using an FTIR spectroscopic method combined with multivariate calibration. Three different extraction methods were compared, namely a saponification method followed by liquid/liquid extraction with hexane/dichloromethane/ethanol/water, a saponification method followed by liquid/liquid extraction with dichloromethane/ethanol/water, and the Bligh & Dyer method using chloroform/methanol/water as the extracting solvent. Qualitative and quantitative analyses of lard were performed using principal component analysis (PCA) and partial least squares (PLS) analysis, respectively. The results showed that, in all samples prepared by the three extraction methods, PCA was capable of identifying lard in the wavelength region of 1200-800 cm-1, with the best result obtained by the Bligh & Dyer method. Furthermore, PLS analysis in the same wavelength region used for qualification showed that Bligh & Dyer was the most suitable extraction method, with the highest determination coefficient (R2) and the lowest root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP) values.

  14. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    NASA Astrophysics Data System (ADS)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities for an organization. Selecting an appropriate forecasting method is important, but quantifying a method's percentage error is even more important if decision makers are to act on the forecasts appropriately. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least squares method yielded a percentage error of 9.77%, and it was concluded that the least squares method works well for time series and trend data.
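
    For reference, both error measures are one-liners once the least squares trend forecast is in hand; the series below is invented.

      import numpy as np

      y = np.array([112.0, 118, 132, 129, 121, 135, 148, 148, 136, 119])  # invented demand
      t = np.arange(1, y.size + 1)

      b, a = np.polyfit(t, y, 1)        # least squares trend line y_hat = a + b*t
      y_hat = a + b * t

      mad = np.mean(np.abs(y - y_hat))                   # Mean Absolute Deviation
      mape = 100.0 * np.mean(np.abs((y - y_hat) / y))    # Mean Absolute Percentage Error
      print(f"MAD = {mad:.2f}, MAPE = {mape:.2f}%")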

  15. A least-squares finite element method for incompressible Navier-Stokes problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan

    1992-01-01

    A least-squares finite element method, based on the velocity-pressure-vorticity formulation, is developed for solving steady incompressible Navier-Stokes problems. This method leads to a minimization problem rather than the saddle-point problem produced by the classic mixed method, and it can thus accommodate equal-order interpolations. The method has no parameter to tune. The associated algebraic system is symmetric and positive definite. Numerical results for the cavity flow at Reynolds numbers up to 10,000 and the backward-facing step flow at Reynolds numbers up to 900 are presented.

  16. Development of New Methods for Predicting the Bistatic Electromagnetic Scattering from Absorbing Shapes

    DTIC Science & Technology

    1990-01-01

    least-squares sense by adding a penalty term proportional to the square of the divergence to the variational principle. At the start of this project... principle required for stable solutions of the electromagnetic field: it must be possible to express the basis functions used in the finite element method as... principle to derive several different methods for computing stable solutions to electromagnetic field problems. To understand the above principle, notice that

  17. Multifrequency synthesis and extraction using square wave projection patterns for quantitative tissue imaging.

    PubMed

    Nadeau, Kyle P; Rice, Tyler B; Durkin, Anthony J; Tromberg, Bruce J

    2015-11-01

    We present a method for spatial frequency domain data acquisition utilizing a multifrequency synthesis and extraction (MSE) method and binary square wave projection patterns. By illuminating a sample with square wave patterns, multiple spatial frequency components are simultaneously attenuated and can be extracted to determine optical property and depth information. Additionally, binary patterns are projected faster than sinusoids typically used in spatial frequency domain imaging (SFDI), allowing for short (millisecond or less) camera exposure times, and data acquisition speeds an order of magnitude or more greater than conventional SFDI. In cases where sensitivity to superficial layers or scattering is important, the fundamental component from higher frequency square wave patterns can be used. When probing deeper layers, the fundamental and harmonic components from lower frequency square wave patterns can be used. We compared optical property and depth penetration results extracted using square waves to those obtained using sinusoidal patterns on an in vivo human forearm and absorbing tube phantom, respectively. Absorption and reduced scattering coefficient values agree with conventional SFDI to within 1% using both high frequency (fundamental) and low frequency (fundamental and harmonic) spatial frequencies. Depth penetration reflectance values also agree to within 1% of conventional SFDI.

  19. Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2017-01-01

    A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point that depends on a simple count of its intentionally loaded load components or gages. The greater the number of a data point's intentionally loaded load components or gages is, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance is used to illustrate benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
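
    The weighted fit itself reduces to the weighted normal equations. In the sketch below, the weighting rule w = 1/k for k intentionally loaded components is only one assignment consistent with the abstract's description (weights between zero and one that shrink as the load count grows), not necessarily the authors' exact formula.

      import numpy as np

      def weighted_least_squares(X, y, w):
          """Solve the weighted normal equations (X' W X) b = X' W y."""
          W = np.diag(w)
          return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

      # Invented calibration points: columns are loads on two balance components
      X = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
      y = np.array([1.02, 0.49, 1.98, 1.53, 3.05])   # gage outputs (illustrative)

      k = (np.abs(X) > 0).sum(axis=1)                # loaded components per data point
      w = 1.0 / k                                    # single-component points dominate
      print(weighted_least_squares(X, y, w))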

  20. How to deal with the high condition number of the noise covariance matrix of gravity field functionals synthesised from a satellite-only global gravity field model?

    NASA Astrophysics Data System (ADS)

    Klees, R.; Slobbe, D. C.; Farahani, H. H.

    2018-03-01

    The posed question arises, for instance, in regional gravity field modelling using weighted least-squares techniques if the gravity field functionals are synthesised from the spherical harmonic coefficients of a satellite-only global gravity model (GGM) and are used as one of the noisy datasets. The associated noise covariance matrix appeared to be extremely ill-conditioned, with a singular value spectrum that decayed gradually to zero without any noticeable gap. We analysed three methods to deal with the ill-conditioned noise covariance matrix: Tikhonov regularisation of the noise covariance matrix in combination with the standard formula for the weighted least-squares estimator, a formula for the weighted least-squares estimator which does not involve the inverse noise covariance matrix, and an estimator based on Rao's unified theory of least-squares. Our analysis was based on a numerical experiment involving a set of height anomalies synthesised from the GGM GOCO05s, which is provided with a full noise covariance matrix. We showed that the three estimators perform similarly, provided that the two regularisation parameters involved in each method were chosen properly. As standard regularisation parameter choice rules do not apply here, we suggested a new parameter choice rule and demonstrated its performance. Using this rule, we found that the differences between the three least-squares estimates were within noise. For the standard formulation of the weighted least-squares estimator with a regularised noise covariance matrix, this required an exceptionally strong regularisation, much stronger than expected from the condition number of the noise covariance matrix. The preferred method is the inversion-free formulation of the weighted least-squares estimator, because of its simplicity with respect to the choice of the two regularisation parameters.

  1. Fabrication of long linear arrays of plastic optical fibers with squared ends for the use of code mark printing lithography

    NASA Astrophysics Data System (ADS)

    Horiuchi, Toshiyuki; Watanabe, Jun; Suzuki, Yuta; Iwasaki, Jun-ya

    2017-05-01

    Two-dimensional code marks are often used for production management. In particular, in the production lines of liquid-crystal-display panels and other devices, data on fabrication processes such as production number and process conditions are written on each substrate or device in detail and used for quality management. For this reason, lithography systems specialized for code mark printing have been developed. However, conventional systems using lamp projection exposure or laser scan exposure are very expensive. Therefore, development of a low-cost exposure system using light emitting diodes (LEDs) and optical fibers with squared ends arrayed in a matrix is strongly desired. In past research, the feasibility of such a new exposure system was demonstrated using a handmade system equipped with 100 LEDs with a central wavelength of 405 nm, a 10×10 matrix of optical fibers with 1 mm square ends, and a 10X projection lens. Building on this progress, a new method for fabricating large-scale arrays of finer fibers with squared ends was developed in this paper. At most 40 plastic optical fibers were arranged in a linear gap of an arraying instrument and simultaneously squared by heating them on a hotplate at 120°C for 7 min. Fiber sizes were homogeneous within 496 ± 4 μm. In addition, the average light leak was reduced from 34.4% to 21.3% by adopting the new method in place of the conventional one-by-one squaring method. Square matrix arrays necessary for printing code marks will be obtained by stacking the newly fabricated linear arrays.

  2. Comparison of Peak-Flow Estimation Methods for Small Drainage Basins in Maine

    USGS Publications Warehouse

    Hodgkins, Glenn A.; Hebson, Charles; Lombard, Pamela J.; Mann, Alexander

    2007-01-01

    Understanding the accuracy of commonly used methods for estimating peak streamflows is important because the designs of bridges, culverts, and other river structures are based on these flows. Different methods for estimating peak streamflows were analyzed for small drainage basins in Maine. For the smallest basins, with drainage areas of 0.2 to 1.0 square mile, nine peak streamflows from actual rainfall events at four crest-stage gaging stations were modeled by the Rational Method and the Natural Resource Conservation Service TR-20 method and compared to observed peak flows. The Rational Method had a root mean square error (RMSE) of -69.7 to 230 percent (which means that approximately two thirds of the modeled flows were within -69.7 to 230 percent of the observed flows). The TR-20 method had an RMSE of -98.0 to 5,010 percent. Both the Rational Method and TR-20 underestimated the observed flows in most cases. For small basins, with drainage areas of 1.0 to 10 square miles, modeled peak flows were compared to observed statistical peak flows with return periods of 2, 50, and 100 years for 17 streams in Maine and adjoining parts of New Hampshire. Peak flows were modeled by the Rational Method, the Natural Resources Conservation Service TR-20 method, U.S. Geological Survey regression equations, and the Probabilistic Rational Method. The regression equations were the most accurate method of computing peak flows in Maine for streams with drainage areas of 1.0 to 10 square miles with an RMSE of -34.3 to 52.2 percent for 50-year peak flows. The Probabilistic Rational Method was the next most accurate method (-38.5 to 62.6 percent). The Rational Method (-56.1 to 128 percent) and particularly the TR-20 method (-76.4 to 323 percent) had much larger errors. Both the TR-20 and regression methods had similar numbers of underpredictions and overpredictions. The Rational Method overpredicted most peak flows and the Probabilistic Rational Method tended to overpredict peak flows from the smaller (less than 5 square miles) drainage basins and underpredict peak flows from larger drainage basins. The results of this study are consistent with the most comprehensive analysis of observed and modeled peak streamflows in the United States, which analyzed statistical peak flows from 70 drainage basins in the Midwest and the Northwest.

  3. Improved methods to estimate the effective impervious area in urban catchments using rainfall-runoff data

    NASA Astrophysics Data System (ADS)

    Ebrahimian, Ali; Wilson, Bruce N.; Gulliver, John S.

    2016-05-01

    Impervious surfaces are useful indicators of the urbanization impacts on water resources. Effective impervious area (EIA), which is the portion of total impervious area (TIA) that is hydraulically connected to the drainage system, is a better catchment parameter for the determination of actual urban runoff. Development of reliable methods for quantifying EIA rather than TIA is currently one of the knowledge gaps in the rainfall-runoff modeling context. The objective of this study is to improve the rainfall-runoff data analysis method for estimating the EIA fraction in urban catchments by eliminating the subjective part of the existing method and by reducing the uncertainty of EIA estimates. First, the theoretical framework is generalized using a general linear least squares model and a general criterion for categorizing runoff events. Issues with the existing method that reduce the precision of the EIA fraction estimates are then identified and discussed. Two improved methods, based on ordinary least squares (OLS) and weighted least squares (WLS) estimates, are proposed to address these issues. The proposed weighted least squares method is then applied to eleven urban catchments in Europe, Canada, and Australia. The results are compared to directly connected impervious area (DCIA) measured from maps and are shown to be consistent with the DCIA values. In addition, both of the improved methods are applied to nine urban catchments in Minnesota, USA. Both methods were successful in removing the subjective component inherent in the current method's analysis of rainfall-runoff data. The WLS method is more robust than the OLS method and generates results that are different from and more precise than those of the OLS method in the presence of heteroscedastic residuals in our rainfall-runoff data.

  4. A Comprehensive Study of Gridding Methods for GPS Horizontal Velocity Fields

    NASA Astrophysics Data System (ADS)

    Wu, Yanqiang; Jiang, Zaisen; Liu, Xiaoxia; Wei, Wenxin; Zhu, Shuang; Zhang, Long; Zou, Zhenyu; Xiong, Xiaohui; Wang, Qixin; Du, Jiliang

    2017-03-01

    Four gridding methods for GPS velocities are compared in terms of their precision, applicability and robustness by analyzing simulated data with uncertainties from 0.0 to ±3.0 mm/a. When the input data are 1° × 1° grid sampled and the uncertainty of the additional error is greater than ±1.0 mm/a, the gridding results show that the least-squares collocation method is highly robust while the robustness of the Kriging method is low. In contrast, the spherical harmonics and the multi-surface function are moderately robust, and the regional singular values for the multi-surface function method and the edge effects for the spherical harmonics method become more significant with increasing uncertainty of the input data. When the input data (with additional errors of ±2.0 mm/a) are decimated by 50% from the 1° × 1° grid data and then erased in three 6° × 12° regions, the gridding results in these three regions indicate that the least-squares collocation and the spherical harmonics methods have good performances, while the multi-surface function and the Kriging methods may lead to singular values. The gridding techniques are also applied to GPS horizontal velocities with an average error of ±0.8 mm/a over the Chinese mainland and the surrounding areas, and the results show that the least-squares collocation method has the best performance, followed by the Kriging and multi-surface function methods. Furthermore, the edge effects of the spherical harmonics method are significantly affected by the sparseness and geometric distribution of the input data. In general, the least-squares collocation method is superior in terms of its robustness, edge effect, error distribution and stability, while the other methods have several positive features.

  5. More efficient parameter estimates for factor analysis of ordinal variables by ridge generalized least squares.

    PubMed

    Yuan, Ke-Hai; Jiang, Ge; Cheng, Ying

    2017-11-01

    Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well. © 2017 The British Psychological Society.
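
    In spirit, a ridge GLS estimator replaces the inverse of a nearly singular weight matrix with a ridge-regularized inverse. The linear-model sketch below shows only that general shape of the computation; the paper's estimator for polychoric correlation structures is more involved.

      import numpy as np

      def ridge_gls(X, y, Gamma, lam):
          """GLS with a ridge-regularized residual covariance:
          b = (X' (Gamma + lam I)^-1 X)^-1 X' (Gamma + lam I)^-1 y."""
          W = np.linalg.inv(Gamma + lam * np.eye(Gamma.shape[0]))
          return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

      rng = np.random.default_rng(2)
      X = np.column_stack([np.ones(30), rng.standard_normal(30)])
      Gamma = 0.1 * np.eye(30) + 0.05        # ill-conditioned covariance (illustrative)
      y = X @ np.array([1.0, 2.0]) + rng.multivariate_normal(np.zeros(30), Gamma)
      print(ridge_gls(X, y, Gamma, lam=0.01))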

  6. A Generalized Least Squares Regression Approach for Computing Effect Sizes in Single-Case Research: Application Examples

    ERIC Educational Resources Information Center

    Maggin, Daniel M.; Swaminathan, Hariharan; Rogers, Helen J.; O'Keeffe, Breda V.; Sugai, George; Horner, Robert H.

    2011-01-01

    A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of…

  7. Kennard-Stone combined with least square support vector machine method for noncontact discriminating human blood species

    NASA Astrophysics Data System (ADS)

    Zhang, Linna; Li, Gang; Sun, Meixiu; Li, Hongxiao; Wang, Zhennan; Li, Yingxin; Lin, Ling

    2017-11-01

    Identifying whole blood as either human or nonhuman is an important responsibility for import-export ports and inspection and quarantine departments. Analytical methods and DNA testing methods are usually destructive. Previous studies demonstrated that the visible diffuse reflectance spectroscopy method can achieve noncontact discrimination of human and nonhuman blood. An appropriate method for calibration set selection is very important for a robust quantitative model. In this paper, the Random Selection (RS) method and the Kennard-Stone (KS) method were applied to select samples for the calibration set. Moreover, a proper chemometric method can greatly improve the performance of a classification or quantification model. The Partial Least Squares Discriminant Analysis (PLSDA) method is commonly used to identify blood species with spectroscopic methods, and the Least Squares Support Vector Machine (LSSVM) has proved well suited to discriminant analysis. In this research, both the PLSDA and LSSVM methods were used for human blood discrimination. Compared with the results of the PLSDA method, the LSSVM method enhanced the performance of the identification models. The overall results showed that the LSSVM method is more feasible for discriminating human and animal blood species, and sufficiently demonstrated that LSSVM is a reliable and robust method for human blood identification that can be more effective and accurate.
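
    The KS selection rule is easy to state: start from the two most distant samples, then repeatedly add the sample whose minimum distance to the already-selected set is largest. A minimal sketch, assuming Euclidean distance on the spectra:

      import numpy as np

      def kennard_stone(X, n_select):
          """Indices of n_select samples chosen by the Kennard-Stone algorithm:
          each new sample maximizes its minimum distance to those selected."""
          D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
          i, j = np.unravel_index(np.argmax(D), D.shape)   # two most distant samples
          selected = [i, j]
          while len(selected) < n_select:
              remaining = [k for k in range(len(X)) if k not in selected]
              dmin = D[np.ix_(remaining, selected)].min(axis=1)
              selected.append(remaining[int(np.argmax(dmin))])
          return selected

      X = np.random.default_rng(3).standard_normal((40, 8))   # stand-in spectra
      print(kennard_stone(X, 10))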

  8. The crux of the method: assumptions in ordinary least squares and logistic regression.

    PubMed

    Long, Rebecca G

    2008-10-01

    Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.

  9. An Angular Method with Position Control for Block Mesh Squareness Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, J.; Stillman, D.

    We optimize a target function defined by angular properties with a position control term for a basic stencil on a block-structured mesh, to improve element squareness in 2D and 3D. Comparison with the condition number method shows that, besides achieving a mesh quality similar to the former with regard to orthogonality, the new method converges faster and provides a more uniform global mesh spacing in our numerical tests.

  10. Potential energy surface fitting by a statistically localized, permutationally invariant, local interpolating moving least squares method for the many-body potential: Method and application to N4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V., E-mail: truhlar@umn.edu, E-mail: candler@aem.umn.edu

    2014-02-07

    Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N4. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.

  11. Quantitative determination of additive Chlorantraniliprole in Abamectin preparation: Investigation of bootstrapping soft shrinkage approach by mid-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Yan, Hong; Song, Xiangzhong; Tian, Kuangda; Chen, Yilin; Xiong, Yanmei; Min, Shungeng

    2018-02-01

    A novel method, mid-infrared (MIR) spectroscopy, which enables the determination of Chlorantraniliprole in Abamectin within minutes, is proposed. We further evaluate the prediction ability of four wavelength selection methods: the bootstrapping soft shrinkage approach (BOSS), Monte Carlo uninformative variable elimination (MCUVE), genetic algorithm partial least squares (GA-PLS) and competitive adaptive reweighted sampling (CARS). The results showed that the BOSS method obtained the lowest root mean squared error of cross validation (RMSECV) (0.0245) and root mean squared error of prediction (RMSEP) (0.0271), as well as the highest coefficient of determination of cross-validation (Q2cv) (0.9998) and coefficient of determination of the test set (Q2test) (0.9989), which demonstrates that mid-infrared spectroscopy can be used to detect Chlorantraniliprole in Abamectin conveniently. Meanwhile, a suitable wavelength selection method (BOSS) is essential for component spectral analysis.

  12. Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures

    DTIC Science & Technology

    2016-06-01

    Keywords: inventory management improvement plan, mean absolute scaled error, lead-time adjusted squared error, forecast accuracy, benchmarking, naïve method. Excerpted acronyms: JASA (Journal of the American Statistical Association), LASE (Lead-time Adjusted Squared Error), LCI (Life Cycle Indicator), MA (Moving Average), MAE (Mean Absolute Error), MSE (Mean Squared Error), NAVSUP (Naval Supply Systems Command), NDAA (National Defense Authorization Act), NIIN (National Individual Identification Number).

  13. Optical NOR logic gate design on square lattice photonic crystal platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D’souza, Nirmala Maria, E-mail: nirmala@cukerala.ac.in; Mathew, Vincent, E-mail: vincent@cukerala.ac.in

    We numerically demonstrate a new configuration of an all-optical NOR logic gate in a square lattice photonic crystal (PhC) waveguide using the finite-difference time-domain (FDTD) method. The logic operations are based on the interference of optical waves. We determined the operating frequency range by calculating the band structure of a perfectly periodic PhC using the plane wave expansion (PWE) method. The response time of this logic gate is 1.98 ps, and it can operate at a speed of about 513 Gbit/s. The proposed device consists of four linear waveguides and a square ring resonator waveguide on the PhC platform.

  14. Chemometrics resolution and quantification power evaluation: Application on pharmaceutical quaternary mixture of Paracetamol, Guaifenesin, Phenylephrine and p-aminophenol

    NASA Astrophysics Data System (ADS)

    Yehia, Ali M.; Mohamed, Heba M.

    2016-01-01

    Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly, without any preliminary separation step, and were successfully applied to pharmaceutical formulation analysis, showing no interference from excipients.

  15. Orthogonalizing EM: A design-based least squares algorithm

    PubMed Central

    Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z. G.

    2016-01-01

    We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For the ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online. PMID:27499558

  16. Salt-induced square prism Pd microtubes and their ethanol electrocatalysis properties

    NASA Astrophysics Data System (ADS)

    Jiang, Kunpeng; Ma, Shenghua; Wang, Yinan; Zhang, Ying; Han, Xiaojun

    2017-05-01

    The synthesis of square prism tubes is always challenging due to their thermodynamic and dynamical instability. We demonstrated a simple method using Pd2+-doped PoPD oligomers as building blocks that assemble into 1D square prism metal-organic microtubes, whose surfaces consist of cataphracted nanosheets. After high-temperature treatment, the microtubes became square prism Pd tubes with a cross-section size of 3 μm. The pure Pd microtubes showed excellent catalytic activity toward the electro-oxidation of ethanol. Their electrochemically active surface area is 48.2 m2 g-1, which indicates that square prism Pd tubes have great potential in the field of fuel cells.

  17. Instrumentation for Measurement of Gas Permeability of Polymeric Membranes

    NASA Technical Reports Server (NTRS)

    Upchurch, Billy T.; Wood, George M.; Brown, Kenneth G.; Burns, Karen S.

    1993-01-01

    A mass spectrometric 'Dynamic Delta' method for the measurement of gas permeability of polymeric membranes has been developed. The method is universally applicable for measurement of the permeability of any gas through polymeric membrane materials. The large sample size of more than 100 square centimeters usually required by other methods is not necessary for this new method, which requires less than one square centimeter. The new method should fulfill requirements and find applicability for industrial materials such as food packaging, contact lenses and other commercial materials where gas permeability or permselectivity properties are important.

  18. QCL spectroscopy combined with the least squares method for substance analysis

    NASA Astrophysics Data System (ADS)

    Samsonov, D. A.; Tabalina, A. S.; Fufurin, I. L.

    2017-11-01

    The article briefly describes the distinctive features of quantum cascade lasers (QCLs). It also describes an experimental set-up for acquiring mid-infrared absorption spectra using a QCL. The paper presents experimental results in the form of normalized spectra. We tested the application of the least squares method to spectrum analysis, using it for substance identification and for extraction of concentration data. We compare the results with more common methods of absorption spectroscopy and demonstrate the feasibility of using this simple method for quantitative and qualitative analysis of experimental data acquired with a QCL.
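
    For the quantitative step, the classical least squares formulation treats a measured spectrum as a nonnegative combination of reference absorption spectra. The sketch below uses SciPy's nonnegative least squares solver; the wavenumber grid and reference spectra are invented.

      import numpy as np
      from scipy.optimize import nnls

      wn = np.linspace(900, 1200, 200)            # wavenumber grid, cm-1 (illustrative)
      ref1 = np.exp(-((wn - 1000.0) / 20.0) ** 2) # invented reference spectra
      ref2 = np.exp(-((wn - 1100.0) / 30.0) ** 2)
      A = np.column_stack([ref1, ref2])

      c_true = np.array([0.8, 0.3])
      s = A @ c_true + 0.01 * np.random.default_rng(4).standard_normal(wn.size)

      c_est, _ = nnls(A, s)                       # concentrations constrained >= 0
      print(c_est)                                # approximately [0.8, 0.3]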

  19. New model for prediction binary mixture of antihistamine decongestant using artificial neural networks and least squares support vector machine by spectrophotometry method

    NASA Astrophysics Data System (ADS)

    Mofavvaz, Shirin; Sohrabi, Mahmoud Reza; Nezamzadeh-Ejhieh, Alireza

    2017-07-01

    In the present study, artificial neural networks (ANNs) and least squares support vector machines (LS-SVM), as intelligent methods based on absorption spectra in the range of 230-300 nm, were used for determination of antihistamine decongestant contents. In the first step, one type of artificial neural network (feed-forward back-propagation) was employed with two different training algorithms, Levenberg-Marquardt (LM) and gradient descent with momentum and adaptive learning rate back-propagation (GDX), and their performance was evaluated; the LM algorithm performed better than the GDX algorithm. In the second step, a radial basis network was utilized and the results were compared with the previous network. In the last step, another intelligent method, the least squares support vector machine, was proposed to construct the antihistamine decongestant prediction model, and the results were compared with the two aforementioned networks. The statistical parameters mean square error (MSE), regression coefficient (R2), correlation coefficient (r), mean recovery (%) and relative standard deviation (RSD) were used to select the best model among these methods. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the results of the suggested and reference methods, showed that there were no significant differences between them.

  20. Simultaneous spectrophotometric determination of four metals by two kinds of partial least squares methods

    NASA Astrophysics Data System (ADS)

    Gao, Ling; Ren, Shouxin

    2005-10-01

    Simultaneous determination of Ni(II), Cd(II), Cu(II) and Zn(II) was studied by two methods, kernel partial least squares (KPLS) and wavelet packet transform partial least squares (WPTPLS), with xylenol orange and cetyltrimethyl ammonium bromide as reagents in a pH 9.22 borax-hydrochloric acid buffer solution. Two programs, PKPLS and PWPTPLS, were designed to perform the calculations. Data reduction was performed using kernel matrices and wavelet packet transform, respectively. In the KPLS method, the size of the kernel matrix depends only on the number of samples, so the method is suitable for data matrices with many wavelengths and few samples. Wavelet packet representations of signals provide a local time-frequency description, so in the wavelet packet domain the quality of noise removal can be improved. In the WPTPLS method, the wavelet function and decomposition level were selected by optimization as Daubechies 12 and 5, respectively. Experimental results showed both methods to be successful even where there was severe overlap of spectra.

  1. Comparison of three newton-like nonlinear least-squares methods for estimating parameters of ground-water flow models

    USGS Publications Warehouse

    Cooley, R.L.; Hill, M.C.

    1992-01-01

    Three methods of solving nonlinear least-squares problems were compared for robustness and efficiency using a series of hypothetical and field problems. A modified Gauss-Newton/full Newton hybrid method (MGN/FN) and an analogous method for which part of the Hessian matrix was replaced by a quasi-Newton approximation (MGN/QN) solved some of the problems with appreciably fewer iterations than required using only a modified Gauss-Newton (MGN) method. In these problems, model nonlinearity and a large variance for the observed data apparently caused MGN to converge more slowly than MGN/FN or MGN/QN after the sum of squared errors had almost stabilized. Other problems were solved as efficiently with MGN as with MGN/FN or MGN/QN. Because MGN/FN can require significantly more computer time per iteration and more computer storage for transient problems, it is less attractive for a general purpose algorithm than MGN/QN.

  2. An O(N squared) method for computing the eigensystem of N by N symmetric tridiagonal matrices by the divide and conquer approach

    NASA Technical Reports Server (NTRS)

    Gill, Doron; Tadmor, Eitan

    1988-01-01

    An efficient method is proposed to solve the eigenproblem of N by N Symmetric Tridiagonal (ST) matrices. Unlike the standard eigensolvers, which necessitate O(N cubed) operations to compute the eigenvectors of such ST matrices, the proposed method computes both the eigenvalues and eigenvectors with only O(N squared) operations. The method is based on a serial implementation of the recently introduced Divide and Conquer (DC) algorithm. It exploits the fact that with O(N squared) DC operations, one can compute the eigenvalues of an N by N ST matrix and a finite number of pairs of successive rows of its eigenvector matrix. The rest of the eigenvectors--all of them or one at a time--are computed by linear three-term recurrence relations. Numerical examples are presented which demonstrate the superiority of the proposed method, saving an order of magnitude in execution time at the expense of sacrificing a few orders of magnitude of accuracy.
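
    For comparison, modern SciPy exposes a direct solver for the symmetric tridiagonal eigenproblem; the call below is a convenient baseline, not the paper's divide-and-conquer implementation.

      import numpy as np
      from scipy.linalg import eigh_tridiagonal

      n = 100
      d = 2.0 * np.ones(n)                   # main diagonal
      e = -1.0 * np.ones(n - 1)              # off-diagonal
      vals, vecs = eigh_tridiagonal(d, e)    # all eigenvalues and eigenvectors
      print(vals[:3])                        # smallest eigenvalues (discrete 1-D Laplacian)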

  3. Variable selection based on clustering analysis for improvement of polyphenols prediction in green tea using synchronous fluorescence spectra

    NASA Astrophysics Data System (ADS)

    Shan, Jiajia; Wang, Xue; Zhou, Hao; Han, Shuqing; Riza, Dimas Firmanda Al; Kondo, Naoshi

    2018-04-01

    Synchronous fluorescence spectra, combined with multivariate analysis, were used to predict flavonoid content in green tea rapidly and nondestructively. This paper presents a new and efficient spectral interval selection method called clustering-based partial least squares (CL-PLS), which selects informative wavelengths by combining the clustering concept with partial least squares (PLS) to improve model performance on synchronous fluorescence spectra. The fluorescence spectra of tea samples were obtained, k-means and Kohonen self-organizing map clustering algorithms were carried out to cluster the full spectra into several clusters, and a sub-PLS regression model was developed on each cluster. Finally, CL-PLS models consisting of gradually selected clusters were built. The correlation coefficient (R) was used to evaluate the prediction performance of the PLS models. In addition, variable influence on projection partial least squares (VIP-PLS), selectivity ratio partial least squares (SR-PLS), interval partial least squares (iPLS) and full-spectrum PLS models were investigated and the results compared. The results showed that CL-PLS gave the best flavonoid predictions from synchronous fluorescence spectra.

  5. Repeatability of paired counts.

    PubMed

    Alexander, Neal; Bethony, Jeff; Corrêa-Oliveira, Rodrigo; Rodrigues, Laura C; Hotez, Peter; Brooker, Simon

    2007-08-30

    The Bland and Altman technique is widely used to assess the variation between replicates of a method of clinical measurement. It yields the repeatability, i.e. the value within which 95 per cent of repeat measurements lie. The valid use of the technique requires that the variance is constant over the data range. This is not usually the case for counts of items such as CD4 cells or parasites, nor is the log transformation applicable to zero counts. We investigate the properties of generalized differences based on Box-Cox transformations. As an example, in a data set of hookworm eggs counted by the Kato-Katz method, the square root transformation is found to stabilize the variance. We show how to back-transform the repeatability on the square root scale to the repeatability of the counts themselves, as an increasing function of the square mean root egg count, i.e. the square of the average of square roots. As well as being more easily interpretable, the back-transformed results highlight the dependence of the repeatability on the sample volume used.
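
    A short sketch of the back-transformation described above, on invented paired counts: since x1 - x2 = (sqrt(x1) - sqrt(x2)) * (sqrt(x1) + sqrt(x2)), a 95% repeatability R on the square-root scale corresponds to roughly 2 * R * sqrt(m) on the count scale, where m is the square mean root.

      import numpy as np

      rng = np.random.default_rng(5)
      x1 = rng.poisson(40, size=100).astype(float)   # invented paired egg counts
      x2 = rng.poisson(40, size=100).astype(float)

      d = np.sqrt(x1) - np.sqrt(x2)
      R = 1.96 * d.std(ddof=1)                       # repeatability on the sqrt scale

      m = ((np.sqrt(x1) + np.sqrt(x2)) / 2.0) ** 2   # square mean root, per pair
      print(R, 2.0 * R * np.sqrt(m.mean()))          # back-transformed repeatability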

  6. Accuracy of maximum likelihood and least-squares estimates in the lidar slope method with noisy data.

    PubMed

    Eberhard, Wynn L

    2017-04-01

    The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
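
    The slope method reduces to a straight-line fit of the range-corrected log signal, ln(r^2 S(r)) = ln C - 2*sigma*r, so inverse-variance weighting drops straight into a linear least squares solve. A sketch on synthetic data (all constants invented):

      import numpy as np

      rng = np.random.default_rng(6)
      r = np.linspace(0.5, 5.0, 60)                  # range, km
      sigma_true, C = 0.3, 1.0e6                     # extinction (1/km), system constant
      S = C * np.exp(-2.0 * sigma_true * r) / r ** 2
      noise_sd = 0.02 * S * (r / r.max())            # illustrative range-dependent noise
      S_noisy = S + noise_sd * rng.standard_normal(r.size)

      y = np.log(r ** 2 * S_noisy)                   # range-corrected log signal
      w = (S_noisy / noise_sd) ** 2                  # approximate inverse variance of y
      X = np.column_stack([np.ones_like(r), r])
      beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
      print(-beta[1] / 2.0)                          # retrieved extinction, ~0.3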

  7. Preliminary Solar Sail Design and Fabrication Assessment: Spinning Sail Blade, Square Sail Sheet

    NASA Technical Reports Server (NTRS)

    Daniels, J. B.; Dowdle, D. M.; Hahn, D. W.; Hildreth, E. N.; Lagerquist, D. R.; Mahagnoul, E. J.; Munson, J. B.; Origer, T. F.

    1977-01-01

    The designs and fabrication methods, equipment, facilities, economics, and schedules for the square sail sheet alternative are evaluated. The baseline spinning sail blade design and related fabrication issues are also assessed.

  8. Applied Algebra: The Modeling Technique of Least Squares

    ERIC Educational Resources Information Center

    Zelkowski, Jeremy; Mayes, Robert

    2008-01-01

    The article focuses on engaging students in algebra through modeling real-world problems. The technique of least squares is explored, encouraging students to develop a deeper understanding of the method. (Contains 2 figures and a bibliography.)

  9. Multifunctional Graphene-Silicone Elastomer Nanocomposite, Method of Making the Same, and Uses Thereof

    NASA Technical Reports Server (NTRS)

    Aksay, Ilhan A. (Inventor); Pan, Shuyang (Inventor); Prud'Homme, Robert K. (Inventor)

    2016-01-01

    A nanocomposite composition having a silicone elastomer matrix with a filler loading of greater than 0.05 weight percent, based on total nanocomposite weight, wherein the filler is functional graphene sheets (FGS) having a surface area of from 300 square meters per gram to 2630 square meters per gram; and a method for producing the nanocomposite and uses thereof.

  10. Evaluation of multivariate calibration models with different pre-processing and processing algorithms for a novel resolution and quantitation of spectrally overlapped quaternary mixture in syrup

    NASA Astrophysics Data System (ADS)

    Moustafa, Azza A.; Hegazy, Maha A.; Mohamed, Dalia; Ali, Omnia

    2016-02-01

    A novel approach for the resolution and quantitation of a severely overlapped quaternary mixture of carbinoxamine maleate (CAR), pholcodine (PHL), ephedrine hydrochloride (EPH) and sunset yellow (SUN) in syrup was demonstrated utilizing different spectrophotometric-assisted multivariate calibration methods. The applied methods used different pre-processing and processing algorithms. The proposed methods were partial least squares (PLS), concentration residuals augmented classical least squares (CRACLS), and a novel method: continuous wavelet transform coupled with partial least squares (CWT-PLS). These methods were applied to a training set in the concentration ranges of 40-100 μg/mL, 40-160 μg/mL, 100-500 μg/mL and 8-24 μg/mL for the four components, respectively. The methods did not require any preliminary separation step or chemical pretreatment. The validity of the methods was evaluated by an external validation set. The selectivity of the developed methods was demonstrated by analyzing the drugs in their combined pharmaceutical formulation without any interference from additives. The obtained results were statistically compared with the official and reported methods, and no significant difference was observed regarding either accuracy or precision.
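
    For readers unfamiliar with PLS-style calibration, a minimal sketch follows (Python with scikit-learn; the random arrays are placeholders for spectra and concentrations, and the component count is illustrative, not the paper's):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))        # stand-in for absorbance spectra
Y = rng.normal(size=(50, 4))          # stand-in for the four concentrations

pls = PLSRegression(n_components=4)
Y_cv = cross_val_predict(pls, X, Y, cv=5)          # cross-validated predictions
rmsecv = np.sqrt(((Y - Y_cv) ** 2).mean(axis=0))   # per-component RMSECV
```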

  11. Quality assessment of gasoline using comprehensive two-dimensional gas chromatography combined with unfolded partial least squares: A reliable approach for the detection of gasoline adulteration.

    PubMed

    Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan

    2016-01-01

    Comprehensive two-dimensional gas chromatography with flame ionization detection, combined with unfolded partial least squares, is proposed as a simple, fast and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline corrected using a two-dimensional asymmetric least squares algorithm. The number of significant partial least squares components to build the model is determined using the minimum value of the root-mean-square error of leave-one-out cross validation, which was 4. In this regard, blends of gasoline with kerosene, white spirit and paint thinner as frequently used adulterants are used to make calibration samples. Appropriate statistical parameters (regression coefficient of 0.996-0.998, root-mean-square error of prediction of 0.005-0.010 and relative error of prediction of 1.54-3.82% for the calibration set) show the reliability of the developed method. In addition, the developed method is externally validated with three samples in a validation set (with a relative error of prediction below 10.0%). Finally, to test the applicability of the proposed strategy for the analysis of real samples, five gasoline samples collected from gas stations are analyzed; their gasoline proportions were in the range of 70-85%. The relative standard deviations were below 8.5% for the different samples in the prediction set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
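
    The asymmetric least squares baseline step can be illustrated with the widely used one-dimensional Eilers-style iteration (the paper applies a two-dimensional variant; the parameter values here are illustrative defaults):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least-squares baseline (1-D sketch).

    Minimizes sum(w_i*(y_i - z_i)^2) + lam*sum((second difference of z)^2);
    the weights w_i are p above the baseline and 1-p below, so peaks are
    largely ignored while the smooth baseline is tracked.
    """
    n = len(y)
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(n, n - 2))
    P = lam * (D @ D.T)                    # smoothness penalty matrix
    w = np.ones(n)
    for _ in range(n_iter):
        W = sparse.diags(w)
        z = spsolve((W + P).tocsc(), w * y)
        w = np.where(y > z, p, 1 - p)      # re-weight asymmetrically
    return z
```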

  12. Determination of propranolol hydrochloride in pharmaceutical preparations using near infrared spectrometry with fiber optic probe and multivariate calibration methods.

    PubMed

    Marques Junior, Jucelino Medeiros; Muller, Aline Lima Hermes; Foletto, Edson Luiz; da Costa, Adilson Ben; Bizzi, Cezar Augusto; Irineu Muller, Edson

    2015-01-01

    A method for determination of propranolol hydrochloride in pharmaceutical preparation using near infrared spectrometry with fiber optic probe (FTNIR/PROBE) and combined with chemometric methods was developed. Calibration models were developed using two variable selection models: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). The treatments based on the mean centered data and multiplicative scatter correction (MSC) were selected for models construction. A root mean square error of prediction (RMSEP) of 8.2 mg g(-1) was achieved using siPLS (s2i20PLS) algorithm with spectra divided into 20 intervals and combination of 2 intervals (8501 to 8801 and 5201 to 5501 cm(-1)). Results obtained by the proposed method were compared with those using the pharmacopoeia reference method and significant difference was not observed. Therefore, proposed method allowed a fast, precise, and accurate determination of propranolol hydrochloride in pharmaceutical preparations. Furthermore, it is possible to carry out on-line analysis of this active principle in pharmaceutical formulations with use of fiber optic probe.

  13. A hybrid feature selection method using multiclass SVM for diagnosis of erythemato-squamous disease

    NASA Astrophysics Data System (ADS)

    Maryam; Setiawan, Noor Akhmad; Wahyunggoro, Oyas

    2017-08-01

    The diagnosis of erythemato-squamous disease is a complex problem that is difficult to resolve in dermatology; moreover, it is a major cause of skin cancer. Data mining implementation in the medical field helps experts to diagnose precisely, accurately, and inexpensively. In this research, we use data mining techniques to develop a diagnosis model based on multiclass SVM with a novel hybrid feature selection method for erythemato-squamous disease. Our hybrid feature selection method, named ChiGA (Chi Square and Genetic Algorithm), combines the advantages of filter and wrapper methods to select the optimal feature subset from the original features. Chi square is used as a filter method to remove redundant features and GA as a wrapper method to select the ideal feature subset, with SVM used as the classifier. Experiments were performed with 10-fold cross validation on the erythemato-squamous disease dataset taken from the University of California Irvine (UCI) machine learning database. The experimental results show that the proposed multiclass SVM model with Chi Square and GA can give an optimum feature subset: 18 optimum features with 99.18% accuracy.
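
    The filter half of such a hybrid can be reproduced in a few lines with scikit-learn; the sketch below substitutes the built-in iris data for the UCI dermatology set (so it stays self-contained) and leaves the GA wrapper as a comment, since a full genetic algorithm is beyond a snippet:

```python
from sklearn.datasets import load_iris            # placeholder dataset
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)                 # chi2 needs nonnegative features
X_f = SelectKBest(chi2, k=3).fit_transform(X, y)  # filter: chi-square ranking
# Wrapper step (a genetic algorithm in the paper) would search subsets of
# X_f, scoring each candidate with the cross-validated SVM accuracy below.
acc = cross_val_score(SVC(kernel="rbf"), X_f, y, cv=10).mean()
print(f"10-fold CV accuracy on the filtered subset: {acc:.3f}")
```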

  14. A class of least-squares filtering and identification algorithms with systolic array architectures

    NASA Technical Reports Server (NTRS)

    Kalson, Seth Z.; Yao, Kung

    1991-01-01

    A unified approach is presented for deriving a large class of new and previously known time- and order-recursive least-squares algorithms with systolic array architectures, suitable for high-throughput-rate and VLSI implementations of space-time filtering and system identification problems. The geometrical derivation given is unique in that no assumption is made concerning the rank of the sample data correlation matrix. This method utilizes and extends the concept of oblique projections, as used previously in the derivations of the least-squares lattice algorithms. Exponentially weighted least-squares criteria are considered for both sliding and growing memory.

  15. Chemometrics resolution and quantification power evaluation: Application on pharmaceutical quaternary mixture of Paracetamol, Guaifenesin, Phenylephrine and p-aminophenol.

    PubMed

    Yehia, Ali M; Mohamed, Heba M

    2016-01-05

    Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly, without any preliminary separation step, and were successfully applied to pharmaceutical formulation analysis, showing no excipient interference. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted ℓ2-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted ℓ2-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.

  17. Weibull Modulus Estimated by the Non-linear Least Squares Method: A Solution to Deviation Occurring in Traditional Weibull Estimation

    NASA Astrophysics Data System (ADS)

    Li, T.; Griffiths, W. D.; Chen, J.

    2017-11-01

    The Maximum Likelihood method and the Linear Least Squares (LLS) method have been widely used to estimate Weibull parameters for the reliability of brittle and metal materials. In the last 30 years, many researchers have focused on the bias of Weibull modulus estimation, and some improvements have been achieved, especially in the case of the LLS method. However, there is a shortcoming in these methods for a specific type of data, where the lower tail deviates dramatically from the well-known linear fit in a classic LLS Weibull analysis. This deviation is commonly found in the measured properties of materials, and previous applications of the LLS method to this kind of dataset yield an unreliable linear regression. This deviation was previously thought to be due to physical flaws (i.e., defects) contained in materials. However, this paper demonstrates that the deviation can also be caused by the linear transformation of the Weibull function that occurs in the traditional LLS method. Accordingly, it may not be appropriate to carry out a Weibull analysis according to the linearized Weibull function, and the Non-linear Least Squares method (Non-LS) is instead recommended for the Weibull modulus estimation of casting properties.
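
    The contrast between the linearized and nonlinear fits is easy to reproduce; a sketch follows (Python/SciPy, using the common median-rank probability estimator, which is an assumption here rather than the paper's choice):

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(x, m, eta):
    return 1.0 - np.exp(-(x / eta) ** m)

def fit_weibull(strengths):
    """Weibull modulus by linearized LLS and by nonlinear least squares."""
    x = np.sort(np.asarray(strengths, dtype=float))
    n = len(x)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # median-rank estimate

    # Traditional linearized fit: ln(-ln(1-F)) = m*ln(x) - m*ln(eta);
    # the double-log transform distorts the lower tail, as the abstract notes.
    m_lls, c = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
    eta_lls = np.exp(-c / m_lls)

    # Nonlinear fit of the untransformed CDF avoids that distortion.
    (m_nls, eta_nls), _ = curve_fit(weibull_cdf, x, F, p0=(m_lls, eta_lls))
    return (m_lls, eta_lls), (m_nls, eta_nls)
```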

  18. SU-F-T-408: On the Determination of Equivalent Squares for Rectangular Small MV Photon Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sauer, OA; Wegener, S; Exner, F

    Purpose: It is common practice to tabulate dosimetric data such as output factors, scatter factors and detector signal correction factors for a set of square fields. In order to get the data for an arbitrary field, it is mapped to an equivalent square having the same scatter as the field of interest. For rectangular fields, both tabulated data and empirical formulas exist. We tested the applicability of such rules for very small fields. Methods: Using the Monte Carlo method (EGSnrc-doseRZ), the dose to a point at 10 cm depth in water was calculated for cylindrical impinging fluence distributions. Radii ranged from 0.5 mm to 11.5 mm with 1 mm ring thickness. Different photon energies were investigated. With these data a matrix was constructed assigning to each element the amount of dose contributed to the field center. By summing up the elements belonging to a certain field, the dose for an arbitrary point at 10 cm depth could be determined. This was done for rectangles up to 21 mm side length. By comparing the dose to square-field results, equivalent squares could be assigned. The results were compared to using the geometric mean and the 4 × area/perimeter rule. Results: For side length differences of less than 2 mm, the difference between all methods was in general less than 0.2 mm. For more elongated fields, relevant differences of more than 1 mm, and up to 3 mm for the fields investigated, occurred. The mean square side length calculated from both empirical formulas fitted much better, deviating by hardly more than 1 mm, and only for the very elongated fields. Conclusion: For small rectangular photon fields deviating only moderately from square, both investigated empirical methods are sufficiently accurate. As the deviations often differ in sign, using their mean improves the accuracy and the usable elongation range. For aspect ratios larger than 2, Monte Carlo generated data are recommended. SW is funded by Deutsche Forschungsgemeinschaft (SA481/10-1)
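
    The two empirical rules compared in the abstract reduce to one-liners; a sketch (side lengths in mm; averaging the two estimates follows the abstract's conclusion):

```python
import math

def equivalent_square(a, b):
    """Equivalent-square side for an a x b rectangular field.

    4*area/perimeter rule: s = 4ab/(2a+2b) = 2ab/(a+b);
    geometric-mean rule:   s = sqrt(ab).
    Their mean is reported above to extend the usable elongation range.
    """
    s_ap = 2.0 * a * b / (a + b)
    s_gm = math.sqrt(a * b)
    return s_ap, s_gm, 0.5 * (s_ap + s_gm)

print(equivalent_square(5.0, 15.0))   # an elongated 5 mm x 15 mm field
```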

  19. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Yidong Xia; Robert Nourgaliev

    2011-05-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, contain both classical finite volume and standard DG methods as two special cases, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed at augmenting the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness.

  20. A survey of various enhancement techniques for square rings antennas

    NASA Astrophysics Data System (ADS)

    Mumin, Abdul Rashid O.; Alias, Rozlan; Abdullah, Jiwa; Abdulhasan, Raed Abdulkareem; Ali, Jawad; Dahlan, Samsul Haimi; Awaleh, Abdisamad A.

    2017-09-01

    The square ring has become a popular shape in antenna design, and researchers have developed it in a variety of configurations. It offers high efficiency and a simple calculation method. Performance enhancement is the main reason to use this geometry, though multiple design objectives for the antenna are also considered. In this paper, different studies of the square ring shape are discussed. The shape has been developed through five different techniques: gain enhancement, dual-band operation, reconfigurable antennas, CSRR, and circular polarization. Moreover, comparisons between these configurations are demonstrated for square ring shapes. In particular, the square ring slot improved the gain by 4.3 dB, provided dual-band resonance at 1.4 and 2.6 GHz with circular polarization at 1.54 GHz, and enabled multi-mode antennas, while the square ring strip achieved an excellent band rejection at 5.5 GHz in a UWB antenna. The square ring slot length, which is tied to the free-space wavelength, is the most influential factor in antenna performance. Finally, comparisons between these techniques are presented.

  1. The evaluation of anthropogenic impact on the ecological stability of landscape.

    PubMed

    Michaeli, Eva; Ivanová, Monika; Koco, Štefan

    2015-01-01

    The model area is the northern surroundings of the Zemplínska Šírava water reservoir in eastern Slovakia. The selection of the examined territory and the time horizons was not random: the aim was to capture the intensity of anthropogenic impact on the coefficient of ecological stability after the construction of the reservoir. The contribution evaluates the ecological stability of the landscape in the years 1956 and 2009 using GIS technology and two methods. The first determines the ecological stability of the landscape on the basis of the significance of land cover classes in a regular network of squares (the real size of each square is 0.5 square km). The second determines it on the basis of human influence on the landscape. The two methods are compared and the output data interpreted (e.g., monitoring the impact of marginal land cover classes with minimal surface area within a grid square on the fluctuation of the index of ecological stability, and considering possibilities to streamline the research results using homogeneous spatial units); the approach also allows changes in the ecological stability of the landscape to be tracked over time.

  2. Prediction of pH of cola beverage using Vis/NIR spectroscopy and least squares-support vector machine

    NASA Astrophysics Data System (ADS)

    Liu, Fei; He, Yong

    2008-02-01

    Visible and near infrared (Vis/NIR) transmission spectroscopy and chemometric methods were utilized to predict the pH values of cola beverages. Five varieties of cola were prepared; 225 samples (45 per variety) were selected for the calibration set and 75 samples (15 per variety) for the validation set. Savitzky-Golay smoothing and standard normal variate (SNV) followed by first-derivative were used as the pre-processing methods. Partial least squares (PLS) analysis was employed to extract the principal components (PCs), which were used as the inputs of a least squares-support vector machine (LS-SVM) model according to their cumulative reliabilities. LS-SVM with a radial basis function (RBF) kernel and a two-step grid search technique was then applied to build the regression model, with PLS regression as a comparison. The correlation coefficient (r), root mean square error of prediction (RMSEP) and bias were 0.961, 0.040 and 0.012 for PLS, and 0.975, 0.031 and 4.697×10^-3 for LS-SVM, respectively. Both methods achieved satisfactory precision. The results indicated that Vis/NIR spectroscopy combined with chemometric methods could be applied as an alternative way to predict the pH of cola beverages.

  3. Least median of squares and iteratively re-weighted least squares as robust linear regression methods for fluorimetric determination of α-lipoic acid in capsules in ideal and non-ideal cases of linearity.

    PubMed

    Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F

    2018-06-01

    This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS) to investigate their application in instrument analysis of nutraceuticals (that is, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: Ordinary Least Squares (OLS), LMS and IRLS. Assessment of linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully studied for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. In the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed for the non-ideal condition and linearity intercept. Under both linearity conditions, LOD and LOQ values after the robust regression line fitting of data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be expanded to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS were compared for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
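
    A generic IRLS loop conveys the idea behind the second approach; this sketch uses plain NumPy and 1/|residual| weights approximating an L1 criterion, which is one common choice rather than the paper's exact weighting:

```python
import numpy as np

def irls_line(x, y, n_iter=20, eps=1e-6):
    """Robust straight-line fit by iteratively re-weighted least squares."""
    A = np.vstack([np.ones_like(x), x]).T
    w = np.ones_like(y)
    for _ in range(n_iter):
        WA = A * w[:, None]
        beta = np.linalg.solve(A.T @ WA, WA.T @ y)   # weighted LS step
        r = y - A @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)         # down-weight outliers
    return beta                                       # intercept, slope
```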

  4. Square-Wave Model for a Pendulum with Oscillating Suspension

    ERIC Educational Resources Information Center

    Yorke, Ellen D.

    1978-01-01

    Demonstrates that if a sinusoidal oscillation of the point of support of a pendulum is approximated by a square wave, a matrix method may be used to discuss parametric resonance and the stability of the inverted pendulum. (Author/SL)
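
    The matrix method mentioned here is compact enough to sketch: over each half period the restoring coefficient is constant, so the state propagates by a closed-form matrix, and boundedness follows from the trace of the one-period (monodromy) matrix. The sketch below assumes the piecewise equation theta'' + k*theta = 0, with k < 0 for the inverted pendulum:

```python
import numpy as np

def half_cycle(k, tau):
    """Propagator of theta'' + k*theta = 0 over time tau (k != 0)."""
    if k > 0:
        w = np.sqrt(k)
        return np.array([[np.cos(w * tau), np.sin(w * tau) / w],
                         [-w * np.sin(w * tau), np.cos(w * tau)]])
    w = np.sqrt(-k)
    return np.array([[np.cosh(w * tau), np.sinh(w * tau) / w],
                     [w * np.sinh(w * tau), np.cosh(w * tau)]])

def stable(k1, k2, period):
    """Floquet criterion: the motion is bounded iff |trace(M)| <= 2."""
    M = half_cycle(k2, period / 2) @ half_cycle(k1, period / 2)
    return abs(np.trace(M)) <= 2.0
```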

  5. On the accuracy of least squares methods in the presence of corner singularities

    NASA Technical Reports Server (NTRS)

    Cox, C. L.; Fix, G. J.

    1985-01-01

    Elliptic problems with corner singularities are discussed. Finite element approximations based on variational principles of the least squares type tend to display poor convergence properties in such contexts; moreover, mesh refinement or the use of special singular elements does not appreciably improve matters. It is shown that if the least squares formulation is done in an appropriately weighted space, then optimal convergence results hold in unweighted spaces such as L^2.

  6. Storage and computationally efficient permutations of factorized covariance and square-root information arrays

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector stored Upper triangular Diagonal factorized covariance and vector stored upper triangular Square Root Information arrays is presented. The method involves cyclic permutation of the rows and columns of the arrays and retriangularization with fast (slow) Givens rotations (reflections). Minimal computation is performed, and a one dimensional scratch array is required. To make the method efficient for large arrays on a virtual memory machine, computations are arranged so as to avoid expensive paging faults. This method is potentially important for processing large volumes of radio metric data in the Deep Space Network.

  7. Complete band gaps of phononic crystal plates with square rods.

    PubMed

    El-Naggar, Sahar A; Mostafa, Samia I; Rafat, Nadia H

    2012-04-01

    Much previous work has been devoted to studying complete band gaps for bulk phononic crystals (PCs). In this paper, we theoretically investigate the existence and widths of these gaps for PC plates. We focus our attention on steel rods of square cross section embedded in an epoxy matrix. The equations for calculating the dispersion relation for square rods in a square or a triangular lattice have been derived. Our analysis is based on the super cell plane wave expansion (SC-PWE) method. The influence of the inclusion filling factor and plate thickness on the existence and width of the phononic band gaps is discussed. Our calculations show that there is a certain filling factor (f=0.55) below which the arrangement of square rods in a triangular lattice is superior to the arrangement in a square lattice. A comparison between square and circular cross-section rods reveals that the former has a larger normalized gap width in the case of a square lattice; this situation is reversed in the case of a triangular lattice. Moreover, a maximum normalized gap width of 0.7 can be achieved for a PC plate of square rods embedded in a square lattice and having a height 90% of the lattice constant. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Nonnegative least-squares image deblurring: improved gradient projection approaches

    NASA Astrophysics Data System (ADS)

    Benvenuto, F.; Zanella, R.; Zanni, L.; Bertero, M.

    2010-02-01

    The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the nonnegativity constraint, when appropriate, does not provide regularization, even if, as far as we know, a thorough investigation of the ill-posedness of the resulting constrained least-squares problem has still to be done. Iterative methods, converging to nonnegative least-squares solutions, have been proposed. Some of them have the 'semi-convergence' property, i.e. early stopping of the iteration provides 'regularized' solutions. In this paper we consider two of these methods: the projected Landweber (PL) method and the iterative image space reconstruction algorithm (ISRA). Even if they work well in many instances, they are not frequently used in practice because, in general, they require a large number of iterations before providing a sensible solution. Therefore, the main purpose of this paper is to refresh these methods by increasing their efficiency. Starting from the remark that PL and ISRA require only the computation of the gradient of the functional, we propose the application to these algorithms of special acceleration techniques that have been recently developed in the area of the gradient methods. In particular, we propose the application of efficient step-length selection rules and line-search strategies. Moreover, remarking that ISRA is a scaled gradient algorithm, we evaluate its behaviour in comparison with a recent scaled gradient projection (SGP) method for image deblurring. Numerical experiments demonstrate that the accelerated methods still exhibit the semi-convergence property, with a considerable gain both in the number of iterations and in the computational time; in particular, SGP appears definitely the most efficient one.
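
    The projected Landweber iteration itself fits in a few lines; the sketch below uses plain gradient projection with a fixed step, without the accelerated step-length rules the paper proposes, and shows why early stopping acts as the regularizer:

```python
import numpy as np

def projected_landweber(A, b, tau, n_iter=100):
    """Nonnegative least squares by projected gradient descent.

    Gradient step on 0.5*||A x - b||^2 followed by projection onto x >= 0;
    tau should satisfy tau < 2 / ||A||^2 for convergence. Stopping early
    yields the 'semi-convergent' regularized solutions discussed above.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = np.maximum(x - tau * grad, 0.0)    # project onto the orthant
    return x
```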

  9. First-Order System Least-Squares for Second-Order Elliptic Problems with Discontinuous Coefficients

    NASA Technical Reports Server (NTRS)

    Manteuffel, Thomas A.; McCormick, Stephen F.; Starke, Gerhard

    1996-01-01

    The first-order system least-squares methodology represents an alternative to standard mixed finite element methods. Among its advantages is the fact that the finite element spaces approximating the pressure and flux variables are not restricted by the inf-sup condition and that the least-squares functional itself serves as an appropriate error measure. This paper studies the first-order system least-squares approach for scalar second-order elliptic boundary value problems with discontinuous coefficients. Ellipticity of an appropriately scaled least-squares bilinear form is established independently of the size of the jumps in the coefficients, leading to adequate finite element approximation results. The occurrence of singularities at interface corners and cross-points is discussed, and a weighted least-squares functional is introduced to handle such cases. Numerical experiments are presented for two test problems to illustrate the performance of this approach.

  10. Respiratory mechanics by least squares fitting in mechanically ventilated patients: application on flow-limited COPD patients.

    PubMed

    Volta, Carlo A; Marangoni, Elisabetta; Alvisi, Valentina; Capuzzo, Maurizia; Ragazzi, Riccardo; Pavanelli, Lina; Alvisi, Raffaele

    2002-01-01

    Although computerized methods of analyzing respiratory system mechanics, such as the least squares fitting (LSF) method, have been used in various patient populations, no conclusive data are available for patients with chronic obstructive pulmonary disease (COPD), probably because they may develop expiratory flow limitation (EFL); this suggests that respiratory mechanics be determined only during inspiration. Setting: an eight-bed multidisciplinary ICU of a teaching hospital. Patients: eight non-flow-limited postvascular surgery patients and eight flow-limited COPD patients. Patients were sedated, paralyzed for diagnostic purposes, and ventilated in volume control ventilation with a constant inspiratory flow rate. Data on resistance, compliance, and dynamic intrinsic positive end-expiratory pressure (PEEPi,dyn) obtained by applying the LSF method during inspiration, expiration, and the overall breathing cycle were compared with those obtained by the traditional method (constant flow, end-inspiratory occlusion). Our results indicate that (a) the presence of EFL markedly decreases the precision of resistance and compliance values measured by the LSF method, (b) the determination of respiratory variables during inspiration allows the calculation of respiratory mechanics in flow-limited COPD patients, and (c) the LSF method is able to detect the presence of PEEPi,dyn if only inspiratory data are used.
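
    The least squares fitting method referred to here is ordinary multiple regression on the single-compartment equation of motion; a sketch follows (restriction to inspiratory samples, as the study recommends, is left to the caller; the variable names are illustrative):

```python
import numpy as np

def lsf_mechanics(p_aw, flow, volume):
    """Fit Paw(t) = R*flow(t) + volume(t)/C + P0 by least squares.

    Returns resistance R, compliance C, and P0, the pressure offset that
    estimates total PEEP (hence PEEPi once external PEEP is subtracted).
    """
    A = np.column_stack([flow, volume, np.ones_like(p_aw)])
    (R, elastance, P0), *_ = np.linalg.lstsq(A, p_aw, rcond=None)
    return R, 1.0 / elastance, P0
```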

  11. Anodic Oxidation of Etodolac and its Linear Sweep, Square Wave and Differential Pulse Voltammetric Determination in Pharmaceuticals

    PubMed Central

    Yilmaz, B.; Kaban, S.; Akcay, B. K.

    2015-01-01

    In this study, simple, fast and reliable cyclic voltammetry, linear sweep voltammetry, square wave voltammetry and differential pulse voltammetry methods were developed and validated for the determination of etodolac in pharmaceutical preparations. The proposed methods were based on the electrochemical oxidation of etodolac at a platinum electrode in acetonitrile solution containing 0.1 M lithium perchlorate. A well-defined oxidation peak was observed at 1.03 V. The calibration curves were linear for etodolac over the concentration range of 2.5-50 μg/ml for the linear sweep, square wave and differential pulse voltammetry methods. Intra- and inter-day precision values for etodolac were less than 4.69, and accuracy (relative error) was better than 2.00%. The mean recovery of etodolac was 100.6% for pharmaceutical preparations. No interference was found from three tablet excipients at the selected assay conditions. The methods developed in this study are accurate, precise and can be easily applied to Etol, Tadolak and Etodin tablets as pharmaceutical preparations. PMID:26664057

  12. Multivariate fault isolation of batch processes via variable selection in partial least squares discriminant analysis.

    PubMed

    Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan

    2017-09-01

    In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step aiming at the identification of the faulty variables contributing most to the detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address such a problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. As different from the traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Development of a non-destructive method for determining protein nitrogen in a yellow fever vaccine by near infrared spectroscopy and multivariate calibration.

    PubMed

    Dabkiewicz, Vanessa Emídio; de Mello Pereira Abrantes, Shirley; Cassella, Ricardo Jorgensen

    2018-08-05

    Near infrared spectroscopy (NIR) with diffuse reflectance, combined with multivariate calibration, has as its main advantage the replacement of the physical separation of interferents by the mathematical separation of their signals, performed rapidly and with no need for reagent consumption, chemical waste production or sample manipulation. Seeking to optimize quality control analyses, this spectroscopic analytical method was shown to be a viable alternative to the classical Kjeldahl method for the determination of protein nitrogen in yellow fever vaccine. The most suitable multivariate calibration was achieved by the partial least squares method (PLS) with multiplicative signal correction (MSC) treatment and mean centering (MC) of the data, using a minimum number of latent variables (LV) equal to 1, with the lowest value of the square root of the mean squared prediction error (0.00330) associated with the highest percentage (91%) of samples. Accuracy ranged from 95 to 105% recovery in the 4000-5184 cm(-1) region. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Convergence and stability of the exponential Euler method for semi-linear stochastic delay differential equations.

    PubMed

    Zhang, Ling

    2017-01-01

    The main purpose of this paper is to investigate the strong convergence and exponential mean-square stability of the exponential Euler method for semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximate solution converges to the analytic solution of the SLSDDE with strong order 1/2. The classical stability theorem for SLSDDEs is given by Lyapunov functions; in this paper, however, we study the exponential mean-square stability of the exact solution to SLSDDEs using the definition of the logarithmic norm. The implicit Euler scheme for SLSDDEs is known to be exponentially stable in mean square for any step size; in this article we show, by the property of the logarithmic norm, that the explicit exponential Euler method for SLSDDEs shares the same stability for any step size.

  15. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
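
    The bias-corrected, transformed-linear model can be sketched as a log-log regression with a parametric retransformation correction (the exp(s^2/2) factor below is one standard choice of bias correction, assumed here rather than taken from the paper):

```python
import numpy as np

def rating_curve(q, c):
    """Fit ln(c) = b0 + b1*ln(q), then correct the back-transform bias."""
    X = np.vstack([np.ones_like(q), np.log(q)]).T
    y = np.log(c)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - 2)       # residual variance, log scale
    bcf = np.exp(s2 / 2.0)                  # parametric bias correction

    def predict(discharge):
        return bcf * np.exp(beta[0] + beta[1] * np.log(discharge))

    return beta, bcf, predict
```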

  16. Load forecasting via suboptimal seasonal autoregressive models and iteratively reweighted least squares estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mbamalu, G.A.N.; El-Hawary, M.E.

    The authors propose suboptimal least squares or IRWLS procedures for estimating the parameters of a seasonal multiplicative AR model encountered during power system load forecasting. The proposed method involves using an interactive computer environment to estimate the parameters of a seasonal multiplicative AR process, and comprises five major computational steps. The first determines the order of the seasonal multiplicative AR process, and the second uses least squares or IRWLS to estimate the optimal nonseasonal AR model parameters. In the third step one obtains the intermediate series by back-forecasting, which is followed by using least squares or IRWLS to estimate the optimal seasonal AR parameters. The final step uses the estimated parameters to forecast future load. The method is applied to predict the Nova Scotia Power Corporation's hourly load at lead times up to 168 hours. The results obtained are documented and compared with results based on the Box-Jenkins method.

  17. Are rapid population estimates accurate? A field trial of two different assessment methods.

    PubMed

    Grais, Rebecca F; Coulombier, Denis; Ampuero, Julia; Lucas, Marcelino E S; Barretto, Avertino T; Jacquier, Guy; Diaz, Francisco; Balandine, Serge; Mahoudeau, Claude; Brown, Vincent

    2006-09-01

    Emergencies resulting in large-scale displacement often lead to populations resettling in areas where basic health services and sanitation are unavailable. To plan relief-related activities quickly, rapid population size estimates are needed. The currently recommended Quadrat method estimates total population by extrapolating the average population size living in square blocks of known area to the total site surface. An alternative approach, the T-Square, provides a population estimate based on analysis of the spatial distribution of housing units taken throughout a site. We field tested both methods and validated the results against a census in Esturro Bairro, Beira, Mozambique. Compared to the census (population: 9,479), the T-Square yielded a better population estimate (9,523) than the Quadrat method (7,681; 95% confidence interval: 6,160-9,201), but was more difficult for field survey teams to implement. Although applicable only to similar sites, several general conclusions can be drawn for emergency planning.
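
    The two estimators differ mainly in their inputs; the sketch below contrasts them (the T-square density formula used is Byth's compound estimator, an assumption on our part since the abstract does not state which variant was fielded):

```python
import math

def quadrat_population(counts, block_area_m2, site_area_m2):
    """Quadrat method: mean density over sampled blocks times site area."""
    density = sum(counts) / (len(counts) * block_area_m2)
    return density * site_area_m2

def t_square_population(x, z, site_area_m2, persons_per_house):
    """T-square method: housing-unit density from point-to-nearest-house
    distances x and T-square house-to-neighbour distances z (metres),
    using lambda = n^2 / (2*sqrt(2)*sum(x)*sum(z))."""
    n = len(x)
    houses_per_m2 = n * n / (2.0 * math.sqrt(2.0) * sum(x) * sum(z))
    return houses_per_m2 * site_area_m2 * persons_per_house
```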

  18. Characterization and Implementation of a Real-World Target Tracking Algorithm on Field Programmable Gate Arrays with Kalman Filter Test Case

    DTIC Science & Technology

    2008-03-01

    to predict its exact position. To locate Ceres, Carl Friedrich Gauss, a mere 24 years old at the time, developed a method called least-squares... dividend to produce the quotient. This method converges to the reciprocal quadratically [11]. For the special case of 1/(H × P(:,:,k) × H' + R) (Eq. 3.9), the... high-speed computation of reciprocals within the overall system. The Newton-Raphson method is also expanded for use in calculating square roots in

  19. a New Method for Calculating the Fractal Dimension of Surface Topography

    NASA Astrophysics Data System (ADS)

    Zuo, Xue; Zhu, Hua; Zhou, Yuankai; Li, Yan

    2015-06-01

    A new method, termed the three-dimensional root-mean-square (3D-RMS) method, is proposed to calculate the fractal dimension (FD) of machined surfaces. The measure of this method is the root-mean-square value of the surface data, and the scale is the side length of a square in the projection plane. In order to evaluate the calculation accuracy of the proposed method, isotropic surfaces with deterministic FD are generated based on the fractional Brownian function and the Weierstrass-Mandelbrot (WM) fractal function, and two kinds of anisotropic surfaces are generated by stretching or rotating a WM fractal curve. Their FDs are estimated by the proposed method, as well as by the differential box-counting (DBC), triangular prism surface area (TPSA) and variation (VM) methods. The results show that the 3D-RMS method performs better than the other methods, with a lower relative error for both isotropic and anisotropic surfaces, especially for surfaces with dimensions higher than 2.5, since the relative error between the estimated value and its theoretical value decreases with theoretical FD. Finally, an electrodeposited surface, an end-turning surface and a grinding surface are chosen as examples to illustrate the application of the 3D-RMS method to real machined surfaces. This method gives a new way to accurately calculate the FD from surface topographic data.
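
    The measure-scale pairing described above lends itself to a direct sketch: compute the mean RMS deviation inside square windows of growing side length, then read the scaling exponent off a log-log fit (the relation FD = 3 - H for self-affine surfaces is assumed here; the paper derives its own calibration):

```python
import numpy as np

def fd_3d_rms(z, scales):
    """Estimate the FD of a height map z from the RMS-versus-scale slope."""
    rms = []
    for s in scales:
        vals = []
        for i in range(0, z.shape[0] - s + 1, s):
            for j in range(0, z.shape[1] - s + 1, s):
                w = z[i:i + s, j:j + s]
                vals.append(np.sqrt(np.mean((w - w.mean()) ** 2)))
        rms.append(np.mean(vals))
    H = np.polyfit(np.log(scales), np.log(rms), 1)[0]   # RMS ~ scale^H
    return 3.0 - H
```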

  20. Least-squares finite element method for fluid dynamics

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Povinelli, Louis A.

    1989-01-01

    An overview is given of new developments of the least squares finite element method (LSFEM) in fluid dynamics. Special emphasis is placed on the universality of LSFEM; the symmetry and positiveness of the algebraic systems obtained from LSFEM; the accommodation of LSFEM to equal order interpolations for incompressible viscous flows; and the natural numerical dissipation of LSFEM for convective transport problems and high speed compressible flows. The performance of LSFEM is illustrated by numerical examples.

  1. Exploring the limits of cryospectroscopy: Least-squares based approaches for analyzing the self-association of HCl

    NASA Astrophysics Data System (ADS)

    De Beuckeleer, Liene I.; Herrebout, Wouter A.

    2016-02-01

    To rationalize the concentration dependent behavior observed for a large spectral data set of HCl recorded in liquid argon, least-squares based numerical methods are developed and validated. In these methods, for each wavenumber a polynomial is used to mimic the relation between monomer concentrations and measured absorbances. Least-squares fitting of higher degree polynomials tends to overfit and thus leads to compensation effects where a contribution due to one species is compensated for by a negative contribution of another. The compensation effects are corrected for by carefully analyzing, using AIC and BIC information criteria, the differences observed between consecutive fittings when the degree of the polynomial model is systematically increased, and by introducing constraints prohibiting negative absorbances to occur for the monomer or for one of the oligomers. The method developed should allow other, more complicated self-associating systems to be analyzed with a much higher accuracy than before.
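
    The degree-selection step can be mimicked with a plain AIC comparison over polynomial fits; a sketch (single wavenumber, synthetic names, AIC in its least-squares form n*ln(RSS/n) + 2k):

```python
import numpy as np

def best_degree(conc, absorbance, max_degree=6):
    """Choose the polynomial degree linking concentration to absorbance by AIC."""
    n = len(conc)
    best = None
    for d in range(1, max_degree + 1):
        coeffs = np.polyfit(conc, absorbance, d)
        rss = np.sum((absorbance - np.polyval(coeffs, conc)) ** 2)
        aic = n * np.log(rss / n) + 2 * (d + 1)   # penalize extra parameters
        if best is None or aic < best[0]:
            best = (aic, d, coeffs)
    return best[1], best[2]      # selected degree and its coefficients
```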

  2. Peelle's pertinent puzzle using the Monte Carlo technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawano, Toshihiko; Talou, Patrick; Burr, Thomas

    2009-01-01

    We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form, in order to assess the impact of the distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct, if the common error is additive and the error is proportional to the measured values. The least squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.

  3. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.

  4. Advantages of soft versus hard constraints in self-modeling curve resolution problems. Alternating least squares with penalty functions.

    PubMed

    Gemperline, Paul J; Cash, Eric

    2003-08-15

    A new algorithm for self-modeling curve resolution (SMCR) that yields improved results by incorporating soft constraints is described. The method uses least squares penalty functions to implement constraints in an alternating least squares algorithm, including nonnegativity, unimodality, equality, and closure constraints. By using least squares penalty functions, soft constraints are formulated rather than hard constraints. Significant benefits are obtained using soft constraints, especially in the form of fewer distortions due to noise in resolved profiles. Soft equality constraints can also be used to introduce incomplete or partial reference information into SMCR solutions. Four different examples demonstrating application of the new method are presented, including resolution of overlapped HPLC-DAD peaks, flow injection analysis data, and batch reaction data measured by UV/visible and near-infrared spectroscopy (NIR). Each example was selected to show one aspect of the significant advantages of soft constraints over traditionally used hard constraints. The method offers a substantial improvement in the ability to resolve time-dependent concentration profiles from mixture spectra recorded as a function of time.
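
    The penalty idea can be conveyed with a deliberately simple gradient version of penalized alternating updates (the paper solves proper alternating least squares subproblems; this sketch only illustrates how a quadratic penalty on negative entries yields a soft, rather than hard, nonnegativity constraint):

```python
import numpy as np

def soft_nonneg_resolution(D, n_components, mu=10.0, lr=1e-3, n_iter=500):
    """Minimize ||D - C S^T||^2 + mu*(||neg(C)||^2 + ||neg(S)||^2).

    neg(X) keeps only the negative entries, so small noise-induced
    negatives are penalized gently instead of being clipped to zero.
    A fixed step size is used; a line search would be more robust.
    """
    rng = np.random.default_rng(0)
    C = np.abs(rng.normal(size=(D.shape[0], n_components)))
    S = np.abs(rng.normal(size=(D.shape[1], n_components)))
    for _ in range(n_iter):
        R = C @ S.T - D
        C -= lr * (2 * R @ S + 2 * mu * np.minimum(C, 0.0))
        S -= lr * (2 * R.T @ C + 2 * mu * np.minimum(S, 0.0))
    return C, S
```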

  5. The exponential behavior and stabilizability of the stochastic magnetohydrodynamic equations

    NASA Astrophysics Data System (ADS)

    Wang, Huaqiao

    2018-06-01

    This paper studies the two-dimensional stochastic magnetohydrodynamic equations, which are used to describe turbulent flows in magnetohydrodynamics. The exponential behavior and the exponential mean square stability of the weak solutions are proved by application of the energy method. Furthermore, we establish pathwise exponential stability by using the exponential mean square stability. When the stochastic perturbations satisfy certain additional hypotheses, we can also obtain pathwise exponential stability results without using the mean square stability.

  6. Method of measuring cross-flow vortices by use of an array of hot-film sensors

    NASA Technical Reports Server (NTRS)

    Agarwal, Aval K. (Inventor); Maddalon, Dal V. (Inventor); Mangalam, Siva M. (Inventor)

    1993-01-01

    The invention is a method for measuring the wavelength of cross-flow vortices of air flow having streamlines of flow traveling across a swept airfoil. The method comprises providing a plurality of hot-film sensors. Each hot-film sensor provides a signal which can be processed, and each hot-film sensor is spaced in a straight-line array such that the distance between successive hot-film sensors is less than the wavelength of the cross-flow vortices being measured. The method further comprises determining the direction of travel of the streamlines across the airfoil and positioning the straight-line array of hot film sensors perpendicular to the direction of travel of the streamlines, such that each sensor has a spanwise location. The method further comprises processing the signals provided by the sensors to provide root-mean-square values for each signal, plotting each root-mean-square value as a function of its spanwise location, and determining the wavelength of the cross-flow vortices by noting the distance between two maxima or two minima of root-mean-square values.

  7. Faraday rotation data analysis with least-squares elliptical fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Adam D.; McHale, G. Brent; Goerz, David A.

    2010-10-15

    A method of analyzing Faraday rotation data from pulsed magnetic field measurements is described. The method uses direct least-squares elliptical fitting to measured data. The least-squares fit conic parameters are used to rotate, translate, and rescale the measured data. Interpretation of the transformed data provides improved accuracy and time-resolution characteristics compared with many existing methods of analyzing Faraday rotation data. The method is especially useful when linear birefringence is present at the input or output of the sensing medium, or when the relative angle of the polarizers used in analysis is not aligned with precision; under these circumstances the method is shown to return the analytically correct input signal. The method may be pertinent to other applications where analysis of Lissajous figures is required, such as the velocity interferometer system for any reflector (VISAR) diagnostics. The entire algorithm is fully automated and requires no user interaction. An example of algorithm execution is shown, using data from a fiber-based Faraday rotation sensor on a capacitive discharge experiment.
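
    Direct least-squares ellipse fitting in the Fitzgibbon style reduces to a small generalized eigenproblem; a sketch follows (numerically naive; production code usually uses the partitioned Halir-Flusser formulation):

```python
import numpy as np

def fit_ellipse(x, y):
    """Conic coefficients [a,b,c,d,e,f] of the best-fit ellipse.

    Minimizes the algebraic error subject to 4ac - b^2 = 1, expressed as
    a generalized eigenproblem on the design matrix [x^2, xy, y^2, x, y, 1].
    """
    Dm = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = Dm.T @ Dm                       # scatter matrix
    C = np.zeros((6, 6))                # constraint matrix for 4ac - b^2
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    w, v = np.linalg.eig(np.linalg.solve(S, C))
    k = np.argmax(w.real)               # the single positive eigenvalue
    return v[:, k].real
```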

  8. A method of bias correction for maximal reliability with dichotomous measures.

    PubMed

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  9. Fast and accurate fitting and filtering of noisy exponentials in Legendre space.

    PubMed

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters.
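
    Representing a decay in Legendre space is a one-call operation with NumPy's polynomial module; a sketch with synthetic data and an illustrative degree:

```python
import numpy as np
from numpy.polynomial import legendre

t = np.linspace(0.0, 5.0, 2000)
rng = np.random.default_rng(1)
y = 2.0 * np.exp(-t / 1.3) + rng.normal(scale=0.2, size=t.size)

u = 2.0 * (t - t[0]) / (t[-1] - t[0]) - 1.0    # map time onto [-1, 1]
coeffs = legendre.legfit(u, y, deg=12)          # low-dimensional representation
y_smooth = legendre.legval(u, coeffs)           # phase-shift-free filtering
```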

  10. Analysis of tractable distortion metrics for EEG compression applications.

    PubMed

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando

    2012-07-01

    Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio.
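
    The two criteria contrasted here are both one-liners, which makes the argument easy to check on real records:

```python
import numpy as np

def prd_percent(original, reconstructed):
    """Percentage root-mean-square difference: relative and unit-free."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def rmse(original, reconstructed):
    """Root-mean-square error in the signal's own units (e.g. microvolts),
    directly comparable with clinical limits on allowable EEG noise."""
    return np.sqrt(np.mean((original - reconstructed) ** 2))
```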

  11. Enhanced photoactivity of BiPO4/(001) facet-dominated square BiOBr flakes by combining heterojunctions with facet engineering effects

    NASA Astrophysics Data System (ADS)

    Shi, Jingzhi; Meng, Xiangying; Hao, Mengjian; Cao, Zhenzhu; He, Weiyan; Gao, Yanfang; Liu, Jinrong

    2018-02-01

    In this study, BiPO4/highly (001) facet exposed square BiOBr flake heterojunction photocatalysts with different molar ratios were fabricated via a two-step method. The synergetic effect of the heterojunction and facet engineering was systematically investigated. The physicochemical properties of the BiPO4/square BiOBr flake composites were characterized based on X-ray diffraction, field emission scanning electron microscopy, transmission electron microscopy, Brunauer-Emmett-Teller method, X-ray photoelectron spectroscopy, ultraviolet-visible diffuse reflectance spectra, photoluminescence, electrochemical impedance spectroscopy, and the photocurrent response. The BiPO4/square BiOBr flake heterojunction photocatalyst exhibited much higher photocatalytic performance compared with the individual BiPO4 and BiOBr. In particular, the BiPO4/BiOBr composite where P/Br = 1/3 exhibited the highest photocatalytic activity. The intensified separation of photoinduced charges at the p-n heterojunction between the BiPO4 nanoparticle and (001) facet of BiOBr was mainly responsible for the enhanced photoactivity.

  12. Chi-square-based scoring function for categorization of MEDLINE citations.

    PubMed

    Kastrin, A; Peterlin, B; Hristovski, D

    2010-01-01

    Text categorization has been used in biomedical informatics for identifying documents containing relevant topics of interest. We developed a simple method that uses a chi-square-based scoring function to determine the likelihood of MEDLINE citations containing genetically relevant topics. Our procedure requires construction of a genetic and a nongenetic domain document corpus. We used MeSH descriptors assigned to MEDLINE citations for this categorization task. We compared the frequencies of MeSH descriptors between the two corpora by applying the chi-square test. A MeSH descriptor was considered to be a positive indicator if its relative observed frequency in the genetic domain corpus was greater than its relative observed frequency in the nongenetic domain corpus. The output of the proposed method is a list of scores for all the citations, with the highest scores given to those citations containing MeSH descriptors typical of the genetic domain. Validation was done on a set of 734 manually annotated MEDLINE citations. The method achieved a predictive accuracy of 0.87 with 0.69 recall and 0.64 precision. We evaluated the method by comparing it to three machine-learning algorithms (support vector machines, decision trees, naïve Bayes). Although the differences were not statistically significant, the results showed that our chi-square scoring performs as well as the compared machine-learning algorithms. We suggest that chi-square scoring is an effective solution to help categorize MEDLINE citations. The algorithm is implemented in the BITOLA literature-based discovery support system as a preprocessor for the gene symbol disambiguation process.
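
    A hedged sketch of the scoring idea, assuming each document is represented as a set of its MeSH descriptors (the paper's exact statistic and thresholds may differ):

        from collections import Counter
        from scipy.stats import chi2_contingency

        def chi2_scores(genetic_docs, nongenetic_docs):
            # genetic_docs / nongenetic_docs: lists of sets of MeSH descriptors.
            # Returns {descriptor: score}, nonzero only for descriptors whose
            # relative frequency is higher in the genetic corpus (positive indicators).
            ng, nn = len(genetic_docs), len(nongenetic_docs)
            fg = Counter(d for doc in genetic_docs for d in doc)
            fn = Counter(d for doc in nongenetic_docs for d in doc)
            scores = {}
            for term in set(fg) | set(fn):
                a, b = fg[term], fn[term]
                table = [[a, ng - a], [b, nn - b]]       # 2x2 contingency table
                chi2, _, _, _ = chi2_contingency(table)
                if a / ng > b / nn:                       # positive indicator only
                    scores[term] = chi2
            return scores

        def score_citation(mesh_terms, scores):
            # A citation's score: sum over its descriptors' indicator scores
            return sum(scores.get(t, 0.0) for t in mesh_terms)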

  13. Red square test for visual field screening. A sensitive and simple bedside test.

    PubMed

    Mandahl, A

    1994-12-01

    A reliable bedside test for the screening of visual field defects is a valuable tool in the examination of patients with a putative disease affecting the sensory visual pathways. Conventional methods such as Donders' confrontation method, counting fingers in the visual field periphery, or two-hand confrontation are not sufficiently sensitive to detect minor but nevertheless serious visual field defects. More sensitive methods requiring only simple tools have also been described. In this study, a test card with four red squares surrounding a fixation target, a black dot, with a total test area of about 11 x 12.5 degrees at a distance of 30 cm, was designed to test the perception of red colour saturation in four quadrants (the red square test). The Goldmann visual field was used as the reference. 125 consecutive patients with pituitary adenoma (159 eyes), craniopharyngeoma (9 eyes), meningeoma (21 eyes), vascular hemisphere lesion (40 eyes), hemisphere tumour (10 eyes) and hemisphere abscess (2 eyes) were examined. The Goldmann visual field and red square test were pathological in pituitary adenomas in 35%, in craniopharyngeomas in 44%, in meningeomas in 52% and in hemisphere tumours or abscess in 100% of the eyes. Among these, no false-normal or false-pathological tests were found. However, in vascular hemisphere disease the corresponding figures were Goldmann visual field 90% and red square test 85%. The 5% difference (4 eyes) was due to Goldmann visual field defects strictly peripheral to the central 15 degrees. These defects were easily diagnosed with two-hand confrontation and

  14. Determination of thiamine HCl and pyridoxine HCl in pharmaceutical preparations using UV-visible spectrophotometry and genetic algorithm based multivariate calibration methods.

    PubMed

    Ozdemir, Durmus; Dinc, Erdal

    2004-07-01

    The simultaneous determination of binary mixtures of pyridoxine hydrochloride and thiamine hydrochloride in a vitamin combination using UV-visible spectrophotometry with classical least squares (CLS) and three newly developed genetic algorithm (GA) based multivariate calibration methods was demonstrated. The three genetic multivariate calibration methods are Genetic Classical Least Squares (GCLS), Genetic Inverse Least Squares (GILS) and Genetic Regression (GR). The sample data set contains the UV-visible spectra of 30 synthetic mixtures (8 to 40 microg/ml) of these vitamins and 10 tablets containing 250 mg of each vitamin. The spectra cover the range from 200 to 330 nm at 0.1 nm intervals. Several calibration models were built with the four methods for the two components. Overall, the standard error of calibration (SEC) and the standard error of prediction (SEP) for the synthetic data ranged from <0.01 to 0.43 microg/ml for all four methods. The SEP values for the tablets ranged from 2.91 to 11.51 mg/tablet. A comparison of the genetic algorithm selected wavelengths for each component using the GR method was also included.
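
    For reference, the CLS step of such a comparison reduces to two linear least-squares solves; a minimal sketch with NumPy (matrix shapes are assumptions for illustration):

        import numpy as np

        # C: (n_samples x n_components) known concentrations of the calibration set
        # S: (n_samples x n_wavelengths) measured calibration spectra
        # Beer-Lambert CLS model: S = C @ K, with K the pure-component spectra
        def cls_calibrate(C, S):
            K, *_ = np.linalg.lstsq(C, S, rcond=None)
            return K

        def cls_predict(K, s_unknown):
            # Least-squares estimate of concentrations from one unknown spectrum
            c, *_ = np.linalg.lstsq(K.T, s_unknown, rcond=None)
            return c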

  15. PLS-LS-SVM based modeling of ATR-IR as a robust method in detection and qualification of alprazolam

    NASA Astrophysics Data System (ADS)

    Parhizkar, Elahehnaz; Ghazali, Mohammad; Ahmadi, Fatemeh; Sakhteman, Amirhossein

    2017-02-01

    According to the United States Pharmacopeia (USP), the gold-standard technique for alprazolam determination in dosage forms is HPLC, an expensive and time-consuming method that is not easily accessible. In this study, chemometrics-assisted ATR-IR was introduced as an alternative method that produces similar results with less time and energy. Fifty-eight samples containing different concentrations of commercial alprazolam were evaluated by the HPLC and ATR-IR methods. A preprocessing approach was applied to convert the raw data obtained from the ATR-IR spectra into a normalized matrix. Finally, a relationship between the alprazolam concentrations obtained by HPLC and the ATR-IR data was established using PLS-LS-SVM (partial least squares least squares support vector machines). The validity of the method was verified, yielding a model with low error values (root mean square error of cross-validation equal to 0.98). The model was able to predict about 99% of the samples according to the R2 of the prediction set. A response permutation test was also applied to confirm that the model was not the result of chance correlations. In conclusion, ATR-IR can be a reliable method for the detection and qualification of alprazolam content in the manufacturing process.

  16. Principal components and iterative regression analysis of geophysical series: Application to Sunspot number (1750 2004)

    NASA Astrophysics Data System (ADS)

    Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.

    2008-11-01

    We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method appears to be a useful improvement for the quantitative analysis of periodicities in non-stationary time series. The principal components determination followed by the least squares iterative regression method was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is to obtain the set of sine functions embedded in the series analyzed in decreasing order of significance, from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Given the need for deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
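
    A rough sketch of the iterative part, assuming initial period guesses are available (e.g. from a spectral pre-analysis of the principal components); each significant sine is fit by nonlinear least squares and subtracted before the next is sought:

        import numpy as np
        from scipy.optimize import least_squares

        def iterative_sine_fit(t, y, n_waves, periods0):
            # periods0: initial period guesses, one per wave, in decreasing
            # order of importance
            residual = y.astype(float).copy()
            waves = []
            for p0 in periods0[:n_waves]:
                def f(q, r=residual):
                    amp, period, phase = q
                    return amp * np.sin(2 * np.pi * t / period + phase) - r
                sol = least_squares(f, x0=[residual.std(), p0, 0.0])
                amp, period, phase = sol.x
                waves.append((amp, period, phase))
                residual -= amp * np.sin(2 * np.pi * t / period + phase)
            return waves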

  17. Weighted linear least squares estimation of diffusion MRI parameters: strengths, limitations, and pitfalls.

    PubMed

    Veraart, Jelle; Sijbers, Jan; Sunaert, Stefan; Leemans, Alexander; Jeurissen, Ben

    2013-11-01

    Linear least squares estimators are widely used in diffusion MRI for the estimation of diffusion parameters. Although adding proper weights is necessary to increase the precision of these linear estimators, there is no consensus on how to practically define them. In this study, the impact of the commonly used weighting strategies on the accuracy and precision of linear diffusion parameter estimators is evaluated and compared with the nonlinear least squares estimation approach. Simulation and real data experiments were done to study the performance of the weighted linear least squares estimators with weights defined by (a) the squares of the respective noisy diffusion-weighted signals; and (b) the squares of the predicted signals, which are reconstructed from a previous estimate of the diffusion model parameters. The negative effect of weighting strategy (a) on the accuracy of the estimator was surprisingly high. Multi-step weighting strategies yield better performance and, in some cases, even outperformed the nonlinear least squares estimator. If proper weighting strategies are applied, the weighted linear least squares approach shows high performance characteristics in terms of accuracy/precision and may even be preferred over nonlinear estimation methods. Copyright © 2013 Elsevier Inc. All rights reserved.
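
    A minimal numerical sketch of the two weighting strategies, assuming a mono-exponential model ln S = ln S0 - b*ADC fit with NumPy (all values illustrative; the paper treats general diffusion models):

        import numpy as np

        def wlls(X, logS, w):
            # Weighted linear least squares: beta = (X^T W X)^{-1} X^T W y
            Xw = X * w[:, None]
            return np.linalg.solve(X.T @ Xw, Xw.T @ logS)

        b = np.array([0., 200., 400., 600., 800., 1000.])   # s/mm^2, illustrative
        S = np.array([1.00, 0.84, 0.71, 0.60, 0.51, 0.43])  # noisy DW signals
        X = np.column_stack([np.ones_like(b), -b])          # ln S = ln S0 - b*ADC

        beta_a = wlls(X, np.log(S), w=S**2)     # strategy (a): noisy signals
        # strategy (b): reweight with signals predicted from a previous estimate
        S_pred = np.exp(X @ beta_a)
        beta_b = wlls(X, np.log(S), w=S_pred**2)
        print(beta_a[1], beta_b[1])             # ADC estimates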

  18. Spin configurations on a decorated square lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mert, Gülistan; Mert, H. Şevki

    Spin configurations on a decorated square lattice are investigated using Bertaut’s microscopic method. We have obtained collinear and non-collinear (canted) modes for the given wave vectors in the ground state. We have found commensurate ferromagnetic and antiferromagnetic spin configurations, as well as canted incommensurate ones.

  19. Jammed systems of oriented needles always percolate on square lattices

    NASA Astrophysics Data System (ADS)

    Kondrat, Grzegorz; Koza, Zbigniew; Brzeski, Piotr

    2017-08-01

    Random sequential adsorption (RSA) is a standard method of modeling adsorption of large molecules at the liquid-solid interface. Several studies have recently conjectured that in the RSA of rectangular needles, or k-mers, on a square lattice, percolation is impossible if the needles are sufficiently long (k on the order of several thousand). We refute these claims and present rigorous proof that in any jammed configuration of nonoverlapping, fixed-length, horizontal, or vertical needles on a square lattice, all clusters are percolating clusters.
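
    The constructive part of such results is easy to experiment with: attempting every possible needle placement once, in random order, yields one jammed RSA configuration (a blocked placement stays blocked, since deposition only adds occupied sites), which can then be checked for a spanning cluster. A small illustrative simulation; lattice size and k are arbitrary choices:

        import random
        from collections import deque

        def rsa_jam(L, k, rng):
            occ = [[0] * L for _ in range(L)]        # 0 = empty, 1 = occupied
            # all placements: o = 0 -> horizontal needle in row r starting at
            # column c; o = 1 -> vertical needle in column r starting at row c
            cands = [(r, c, o) for r in range(L) for c in range(L - k + 1)
                     for o in (0, 1)]
            rng.shuffle(cands)   # one pass in random order ends in a jammed state
            for r, c, o in cands:
                cells = ([(r, c + j) for j in range(k)] if o == 0
                         else [(c + j, r) for j in range(k)])
                if all(occ[i][j] == 0 for i, j in cells):
                    for i, j in cells:
                        occ[i][j] = 1
            return occ

        def spans_left_right(occ):
            # BFS over occupied nearest-neighbor sites from the left edge
            L = len(occ)
            seen = [[False] * L for _ in range(L)]
            q = deque((i, 0) for i in range(L) if occ[i][0])
            for i, _ in q:
                seen[i][0] = True
            while q:
                i, j = q.popleft()
                if j == L - 1:
                    return True
                for a, b in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if 0 <= a < L and 0 <= b < L and occ[a][b] and not seen[a][b]:
                        seen[a][b] = True
                        q.append((a, b))
            return False

        rng = random.Random(0)
        print(spans_left_right(rsa_jam(80, 5, rng)))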

  20. Chi-squared and C statistic minimization for low count per bin data

    NASA Astrophysics Data System (ADS)

    Nousek, John A.; Shue, David R.

    1989-07-01

    Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.
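
    Both statistics are straightforward to minimize with a derivative-free optimizer such as Powell's method. A hedged sketch in Python; the exact form of the C statistic varies between formulations, and the version below includes the data-dependent terms so that it behaves like chi-squared for large counts:

        import numpy as np
        from scipy.optimize import minimize

        def chi2_stat(params, bins, counts, model):
            m = model(bins, params)
            return np.sum((counts - m) ** 2 / m)

        def cash_c(params, bins, counts, model):
            # C statistic for Poisson data; the d*ln(d/m) term is taken as
            # zero where the observed count d = 0
            m = model(bins, params)
            d = counts
            term = np.where(d > 0, d * np.log(d / m), 0.0)
            return 2.0 * np.sum(m - d + term)

        # Powell's method needs no derivatives, which suits both statistics:
        # fit = minimize(cash_c, x0, args=(bins, counts, model), method="Powell")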

  1. Chi-squared and C statistic minimization for low count per bin data. [sampling in X ray astronomy

    NASA Technical Reports Server (NTRS)

    Nousek, John A.; Shue, David R.

    1989-01-01

    Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.

  2. Superresolution restoration of an image sequence: adaptive filtering approach.

    PubMed

    Elad, M; Feuer, A

    1999-01-01

    This paper presents a new method based on adaptive filtering theory for the superresolution restoration of continuous image sequences. The proposed methodology suggests least squares (LS) estimators which adapt in time, based on adaptive filters: least mean squares (LMS) or recursive least squares (RLS). The adaptation enables the treatment of linear space- and time-variant blurring and arbitrary motion, both of them assumed known. The proposed new approach is shown to have relatively low computational requirements. Simulations demonstrating the superresolution restoration algorithms are presented.

  3. Method of Characteristics Calculations and Computer Code for Materials with Arbitrary Equations of State and Using Orthogonal Polynomial Least Square Surface Fits

    NASA Technical Reports Server (NTRS)

    Chang, T. S.

    1974-01-01

    A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.

  4. A new least-squares transport equation compatible with voids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, J. B.; Morel, J. E.

    2013-07-01

    We define a new least-squares transport equation that is applicable in voids, can be solved using source iteration with diffusion-synthetic acceleration, and requires only the solution of an independent set of second-order self-adjoint equations for each direction during each source iteration. We derive the equation, discretize it using the S_n method in conjunction with a linear-continuous finite-element method in space, and computationally demonstrate several of its properties.

  5. Petroleomics by electrospray ionization FT-ICR mass spectrometry coupled to partial least squares with variable selection methods: prediction of the total acid number of crude oils.

    PubMed

    Terra, Luciana A; Filgueiras, Paulo R; Tose, Lílian V; Romão, Wanderson; de Souza, Douglas D; de Castro, Eustáquio V R; de Oliveira, Mirela S L; Dias, Júlio C M; Poppi, Ronei J

    2014-10-07

    Negative-ion mode electrospray ionization, ESI(-), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled to Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. Generally, ESI(-)-FT-ICR mass spectra present a resolving power of ca. 500,000 and a mass accuracy of better than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M - H](-) ions, which are identified primarily as naphthenic acids, phenols and carbazole analog species. The TAN values for all samples ranged from 0.06 to 3.61 mg of KOH g(-1). To facilitate the spectral interpretation, three methods of variable selection were studied: variable importance in the projection (VIP), interval partial least squares (iPLS) and elimination of uninformative variables (UVE). The UVE method seems to be the most appropriate for selecting important variables, reducing the dimension of the variables to 183 and producing a root mean square error of prediction of 0.32 mg of KOH g(-1). By reducing the size of the data, it was possible to relate the selected variables to their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.

  6. On recursive least-squares filtering algorithms and implementations. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hsieh, Shih-Fu

    1990-01-01

    In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect changes in the system and make appropriate adjustments to achieve optimum performance. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve a high throughput rate, many algorithms are being vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD is considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, were investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in detail, including the Householder reflector, the Gram-Schmidt procedure, and Givens rotation. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest. Various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations provided for detailed comparisons show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for RLS filtering problems. In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of any of the new methods depends crucially on the specific application.
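
    For orientation, the conventional covariance-form RLS recursion that these square-root (QRD) formulations stabilize can be sketched in a few lines (a textbook form, not the thesis's systolic implementation):

        import numpy as np

        class RLS:
            # Exponentially weighted recursive least squares, covariance form
            def __init__(self, n, lam=0.99, delta=100.0):
                self.w = np.zeros(n)           # filter weights
                self.P = delta * np.eye(n)     # inverse correlation matrix estimate
                self.lam = lam                 # forgetting factor

            def update(self, x, d):
                Px = self.P @ x
                k = Px / (self.lam + x @ Px)   # gain vector
                e = d - self.w @ x             # a priori error
                self.w = self.w + k * e
                self.P = (self.P - np.outer(k, Px)) / self.lam
                return e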

  7. 1.9 μm square-wave passively Q-switched mode-locked fiber laser.

    PubMed

    Ma, Wanzhuo; Wang, Tianshu; Su, Qingchao; Wang, Furen; Zhang, Jing; Wang, Chengbo; Jiang, Huilin

    2018-05-14

    We propose and demonstrate the operation of Q-switched mode-locked square-wave pulses in a thulium-holmium co-doped fiber laser. By using a nonlinear amplifying loop mirror, a continuous square-wave dissipative-soliton-resonance pulse is obtained with a 4.4 MHz repetition rate. With increasing pump power, the square-wave pulse duration can be broadened from 1.7 ns to 3.2 ns. On this basis, Q-switched mode-locked operation is achieved by properly setting the pump power and the polarization controllers. The internal mode-locked pulses in the Q-switched envelope retain the square-wave shape. The Q-switched repetition rate can be varied from 41.6 kHz to 74 kHz by increasing the pump power. The corresponding average single-pulse energy increases from 2.67 nJ to 5.2 nJ. The average peak power also improves from 0.6 W to 1.1 W when continuous square-wave operation is changed into Q-switched mode-locked operation. This indicates that Q-switched mode-locked operation is an effective method to increase the square-wave pulse energy and peak power.

  8. Correcting bias in the rational polynomial coefficients of satellite imagery using thin-plate smoothing splines

    NASA Astrophysics Data System (ADS)

    Shen, Xiang; Liu, Bin; Li, Qing-Quan

    2017-03-01

    The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates that the new method can be more effective at removing systematic biases in vendor-supplied RPCs.
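
    As an illustration of the core idea, a thin-plate spline correction surface can be fit to the RPC residuals at the GCPs and then evaluated anywhere in the image. The sketch below uses SciPy's RBFInterpolator with a thin-plate-spline kernel; coordinates and residuals are made-up placeholders, not values from the paper:

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # gcp_img: (n, 2) image coordinates projected through the vendor RPCs
        # bias: (n, 2) residuals against the reference coordinates of the GCPs
        gcp_img = np.array([[100., 120.], [5000., 300.], [200., 4800.],
                            [5100., 5000.], [2500., 2500.]])
        bias = np.array([[1.2, -0.8], [1.0, -0.9], [1.4, -0.6],
                         [0.9, -1.1], [1.1, -0.8]])

        # Thin-plate spline correction surface; smoothing > 0 guards against
        # overfitting when GCPs are few
        tps = RBFInterpolator(gcp_img, bias,
                              kernel="thin_plate_spline", smoothing=0.0)

        new_pts = np.array([[3000., 1000.]])
        corrected = new_pts - tps(new_pts)   # subtract the modeled RPC bias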

  9. PROPOSED MODIFICATIONS OF K2-TEMPERATURE RELATION AND LEAST SQUARES ESTIMATES OF BOD (BIOCHEMICAL OXYGEN DEMAND) PARAMETERS

    EPA Science Inventory

    A technique is presented for finding the least squares estimates for the ultimate biochemical oxygen demand (BOD) and rate coefficient for the BOD reaction without resorting to complicated computer algorithms or subjective graphical methods. This may be used in stream water quali...
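
    The underlying model is commonly the first-order BOD form y(t) = L(1 - exp(-kt)). A minimal nonlinear least-squares sketch with illustrative data (note the report's own technique avoids iterative algorithms, which the generic fit below does not):

        import numpy as np
        from scipy.optimize import curve_fit

        def bod(t, L, k):
            # First-order BOD model: y(t) = L * (1 - exp(-k t))
            return L * (1.0 - np.exp(-k * t))

        t = np.array([1., 2., 3., 4., 5., 7., 10.])          # days
        y = np.array([3.2, 5.6, 7.3, 8.6, 9.5, 10.7, 11.6])  # mg/L, illustrative

        (L_hat, k_hat), cov = curve_fit(bod, t, y, p0=[12.0, 0.3])
        print(L_hat, k_hat)   # ultimate BOD and rate coefficient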

  10. Simplicity and Typical Rank Results for Three-Way Arrays

    ERIC Educational Resources Information Center

    ten Berge, Jos M. F.

    2011-01-01

    Matrices can be diagonalized by singular vectors or, when they are symmetric, by eigenvectors. Pairs of square matrices often admit simultaneous diagonalization, and always admit blockwise simultaneous diagonalization. Generalizing these possibilities to more than two (non-square) matrices leads to methods of simplifying three-way arrays by…

  11. Why Might Relative Fit Indices Differ between Estimators?

    ERIC Educational Resources Information Center

    Weng, Li-Jen; Cheng, Chung-Ping

    1997-01-01

    Relative fit indices using the null model as the reference point in computation may differ across estimation methods, as this article illustrates by comparing maximum likelihood, ordinary least squares, and generalized least squares estimation in structural equation modeling. The illustration uses a covariance matrix for six observed variables…

  12. Method of Making Large Area Nanostructures

    NASA Technical Reports Server (NTRS)

    Marks, Alvin M.

    1995-01-01

    A method which enables the high speed formation of nanostructures on large area surfaces is described. The method uses a super sub-micron beam writer (Supersebter). The Supersebter uses a large area multi-electrode (Spindt type emitter source) to produce multiple electron beams simultaneously scanned to form a pattern on a surface in an electron beam writer. A 100,000 x 100,000 array of electron point sources is proposed, demagnified in a long electron beam writer to simultaneously produce 10 billion nano-patterns on a 1 square meter surface by multi-electron beam impact on a 1 square centimeter surface of an insulating material.

  13. Non-oscillatory and non-diffusive solution of convection problems by the iteratively reweighted least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan

    1993-01-01

    A comparative description is presented for the least-squares FEM (LSFEM) for 2D steady-state pure convection problems. In addition to exhibiting better control of the streamline derivative than the streamline upwinding Petrov-Galerkin method, numerical convergence rates are obtained which show the LSFEM to be virtually optimal. The LSFEM is used as a framework for an iteratively reweighted LSFEM yielding nonoscillatory and nondiffusive solutions for problems with contact discontinuities; this method is shown to convect contact discontinuities without error when using triangular and bilinear elements.

  14. 2015 RECS Square Footage Methodology

    EIA Publications

    2017-01-01

    The square footage, or size, of a home is an important characteristic in understanding its energy use. The amounts of energy used for major end uses such as space heating and air conditioning are strongly related to the size of the home. The Residential Energy Consumption Survey (RECS), conducted by the U.S. Energy Information Administration (EIA), collects information about the size of the responding housing units as part of the data collection protocol. The methods used to collect data on housing unit size produce square footage estimates that are unique to RECS because they are designed to capture the energy-consuming space within a home. This document discusses how the 2015 RECS square footage estimates were produced.

  15. Comparative assessment of orthogonal polynomials for wavefront reconstruction over the square aperture.

    PubMed

    Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing

    2014-10-01

    Four orthogonal polynomials for reconstructing a wavefront over a square aperture based on the modal method are currently available, namely, the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials and Numerical polynomials. They are all orthogonal over the full unit square domain. 2D Chebyshev polynomials are defined by the product of Chebyshev polynomials in the x and y variables, as are 2D Legendre polynomials. Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, where the integration region is the full unit square circumscribed outside the unit circle. Numerical polynomials are obtained by numerical calculation. The present study compares these four orthogonal polynomials through theoretical analysis and numerical experiments, from the aspects of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomial is superior to the other three polynomials because of its high accuracy and robustness, even in the case of a wavefront with incomplete data.
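
    For intuition, polynomials orthonormal over the square can be produced by Gram-Schmidt orthonormalization of monomials, as in this rough numerical sketch (grid resolution and degree are arbitrary choices, and the discrete inner product only approximates the integral):

        import numpy as np

        # Dense grid over the full unit square [-1, 1] x [-1, 1]
        n = 201
        u = np.linspace(-1.0, 1.0, n)
        X, Y = np.meshgrid(u, u)

        def inner(f, g):
            # Discrete approximation of the integral of f*g over the square
            return np.mean(f * g) * 4.0

        # Monomial basis x^i * y^j up to total degree 3, then modified Gram-Schmidt
        basis = [X**i * Y**j for i in range(4) for j in range(4) if i + j <= 3]
        ortho = []
        for p in basis:
            q = p.copy()
            for e in ortho:
                q = q - inner(q, e) * e
            ortho.append(q / np.sqrt(inner(q, q)))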

  16. Simulation-Based Approach to Determining Electron Transfer Rates Using Square-Wave Voltammetry.

    PubMed

    Dauphin-Ducharme, Philippe; Arroyo-Currás, Netzahualcóyotl; Kurnik, Martin; Ortega, Gabriel; Li, Hui; Plaxco, Kevin W

    2017-05-09

    The efficiency with which square-wave voltammetry differentiates faradaic and charging currents makes it a particularly sensitive electroanalytical approach, as evidenced by its ability to measure nanomolar or even picomolar concentrations of electroactive analytes. Because of the relative complexity of the potential sweep it uses, however, the extraction of detailed kinetic and mechanistic information from square-wave data remains challenging. In response, we demonstrate here a numerical approach by which square-wave data can be used to determine electron transfer rates. Specifically, we have developed a numerical approach in which we fit the height and shape of voltammograms collected over a range of square-wave frequencies and amplitudes to simulated voltammograms computed as functions of the heterogeneous rate constant and the electron transfer coefficient. As validation of the approach, we have used it to determine electron transfer kinetics in both freely diffusing and diffusionless surface-tethered species, obtaining electron transfer kinetics in all cases in good agreement with values derived using non-square-wave methods.

  17. Maximum correntropy square-root cubature Kalman filter with application to SINS/GPS integrated systems.

    PubMed

    Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng

    2018-05-31

    For a nonlinear system, the cubature Kalman filter (CKF) and its square-root version are useful methods for solving state estimation problems, and both can achieve good performance in Gaussian noise. However, their performance often degrades significantly in the face of non-Gaussian noise, particularly when the measurements are contaminated by heavy-tailed impulsive noise. By utilizing the maximum correntropy criterion (MCC) to improve robustness instead of the traditional minimum mean square error (MMSE) criterion, a new square-root nonlinear filter is proposed in this study, named the maximum correntropy square-root cubature Kalman filter (MCSCKF). The new filter not only retains the advantage of the square-root cubature Kalman filter (SCKF), but also exhibits robust performance against heavy-tailed non-Gaussian noise. A judgment condition that avoids numerical problems is also given. The results of two illustrative examples, especially the SINS/GPS integrated systems, demonstrate the desirable performance of the proposed filter. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  18. A root-mean-square approach for predicting fatigue crack growth under random loading

    NASA Technical Reports Server (NTRS)

    Hudson, C. M.

    1981-01-01

    A method for predicting fatigue crack growth under random loading which employs the concept of Barsom (1976) is presented. In accordance with this method, the loading history for each specimen is analyzed to determine the root-mean-square maximum and minimum stresses, and the predictions are made by assuming the tests have been conducted under constant-amplitude loading at the root-mean-square maximum and minimum levels. The procedure requires a simple computer program and a desk-top computer. For the eleven predictions made, the ratios of the predicted lives to the test lives ranged from 2.13 to 0.82, which is a good result, considering that the normal scatter in the fatigue-crack-growth rates may range from a factor of two to four under identical loading conditions.
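
    The root-mean-square stress levels at the heart of this method are simple to compute from a cycle-counted load history; a small sketch, assuming the peaks and valleys of each cycle have already been extracted:

        import numpy as np

        def rms_stress_levels(peaks, valleys):
            # Root-mean-square maximum and minimum stresses of a random load
            # history; crack growth is then predicted as if the test were run
            # at constant amplitude between these two levels.
            s_max_rms = np.sqrt(np.mean(np.asarray(peaks) ** 2))
            s_min_rms = np.sqrt(np.mean(np.asarray(valleys) ** 2))
            return s_max_rms, s_min_rms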

  19. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.

  20. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE PAGES

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    2016-11-28

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.

  1. Real-Time Adaptive Least-Squares Drag Minimization for Performance Adaptive Aeroelastic Wing

    NASA Technical Reports Server (NTRS)

    Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric

    2016-01-01

    This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.

  2. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    DOEpatents

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
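
    The standard constrained alternating-least-squares backbone that such methods build on can be sketched as follows, using nonnegativity constraints via scipy.optimize.nnls (the patent's specific bias-offsetting steps are not reproduced here):

        import numpy as np
        from scipy.optimize import nnls

        def als_nonneg(D, k, n_iter=100, seed=0):
            # Alternating least squares with nonnegativity, D ~= C @ S.T
            # D: (m x n) data matrix (e.g. spectra in rows); k: number of factors
            rng = np.random.default_rng(seed)
            m, n = D.shape
            S = rng.random((n, k))
            C = np.zeros((m, k))
            for _ in range(n_iter):
                for i in range(m):              # C step: one row of C at a time
                    C[i], _ = nnls(S, D[i])
                for j in range(n):              # S step: one row of S at a time
                    S[j], _ = nnls(C, D[:, j])
            return C, S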

  3. Addressing the identification problem in age-period-cohort analysis: a tutorial on the use of partial least squares and principal components analysis.

    PubMed

    Tu, Yu-Kang; Krämer, Nicole; Lee, Wen-Chung

    2012-07-01

    In the analysis of trends in health outcomes, an ongoing issue is how to separate and estimate the effects of age, period, and cohort. As these 3 variables are perfectly collinear by definition, regression coefficients in a general linear model are not unique. In this tutorial, we review why identification is a problem, and how this problem may be tackled using partial least squares and principal components regression analyses. Both methods produce regression coefficients that fulfill the same collinearity constraint as the variables age, period, and cohort. We show that, because the constraint imposed by partial least squares and principal components regression is inherent in the mathematical relation among the 3 variables, this leads to more interpretable results. We use one dataset from a Taiwanese health-screening program to illustrate how to use partial least squares regression to analyze the trends in body heights with 3 continuous variables for age, period, and cohort. We then use another dataset of hepatocellular carcinoma mortality rates for Taiwanese men to illustrate how to use partial least squares regression to analyze tables with aggregated data. We use the second dataset to show the relation between the intrinsic estimator, a recently proposed method for the age-period-cohort analysis, and partial least squares regression. We also show that the inclusion of all indicator variables provides a more consistent approach. R code for our analyses is provided in the eAppendix.

  4. Modeling and control of non-square MIMO system using relay feedback.

    PubMed

    Kalpana, D; Thyagarajan, T; Gokulraj, N

    2015-11-01

    This paper proposes a systematic approach for the modeling and control of non-square MIMO systems in the time domain using relay feedback. Conventionally, the modeling, selection of the control configuration, and controller design of non-square MIMO systems are performed using input/output information from the direct loops, while the undesired responses, which bear valuable information on the interaction among the loops, are not considered. In this paper, the undesired response obtained from the relay feedback test is also taken into consideration to extract information about the interaction between the loops. The studies are performed on an Air Path Scheme of Turbocharged Diesel Engine (APSTDE) model, a typical non-square MIMO system with three input variables and two output variables. From the relay test response, generalized analytical expressions are derived and used to estimate unknown system parameters and to evaluate interaction measures. The interaction is analyzed using the Block Relative Gain (BRG) method. The model thus identified is later used to design an appropriate controller for closed-loop studies. Closed-loop simulation studies were performed for both servo and regulatory operations. The Integral of Squared Error (ISE) performance criterion is employed to quantitatively evaluate the performance of the proposed scheme. The usefulness of the proposed method is demonstrated on a lab-scale Two-Tank Cylindrical Interacting System (TTCIS), which is configured as a non-square system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Least Squares Best Fit Method for the Three Parameter Weibull Distribution: Analysis of Tensile and Bend Specimens with Volume or Surface Flaw Failure

    NASA Technical Reports Server (NTRS)

    Gross, Bernard

    1996-01-01

    Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.
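
    A compact sketch of a least-squares fit of the three-parameter Weibull model: the location parameter is grid-searched, and for each candidate the shape and scale follow from a linear fit of the linearized CDF using median-rank plotting positions (one common convention; the report's exact formulation may differ):

        import numpy as np

        def weibull3_lsq(t):
            # F = 1 - exp(-((t - gamma)/eta)^beta) linearizes to
            # ln(-ln(1 - F)) = beta*ln(t - gamma) - beta*ln(eta)
            t = np.sort(np.asarray(t, dtype=float))
            n = t.size
            F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median ranks
            yy = np.log(-np.log(1.0 - F))
            best = None
            for gamma in np.linspace(0.0, 0.99 * t[0], 100):
                xx = np.log(t - gamma)
                beta, c = np.polyfit(xx, yy, 1)           # slope = shape
                r = np.corrcoef(xx, yy)[0, 1]
                if best is None or r > best[0]:
                    eta = np.exp(-c / beta)               # scale
                    best = (r, gamma, beta, eta)
            return best[1:]                               # (location, shape, scale)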

  6. Corruption costs lives: evidence from a cross-country study.

    PubMed

    Li, Qiang; An, Lian; Xu, Jing; Baliamoune-Lutz, Mina

    2018-01-01

    This paper investigates the effect of corruption on health outcomes by using cross-country panel data covering about 150 countries for the period of 1995 to 2012. We employ ordinary least squares (OLS), fixed-effects and two-stage least squares (2SLS) estimation methods, and find that corruption significantly increases mortality rates, and reduces life expectancy and immunization rates. The results are consistent across different regions, gender, and measures of corruption. The findings suggest that reducing corruption can be an effective method to improve health outcomes.

  7. Arrhenius time-scaled least squares: a simple, robust approach to accelerated stability data analysis for bioproducts.

    PubMed

    Rauk, Adam P; Guo, Kevin; Hu, Yanling; Cahya, Suntara; Weiss, William F

    2014-08-01

    Defining a suitable product presentation with an acceptable stability profile over its intended shelf-life is one of the principal challenges in bioproduct development. Accelerated stability studies are routinely used as a tool to better understand long-term stability. Data analysis often employs an overall mass action kinetics description for the degradation and the Arrhenius relationship to capture the temperature dependence of the observed rate constant. To improve predictive accuracy and precision, the current work proposes a least-squares estimation approach with a single nonlinear covariate and uses a polynomial to describe the change in a product attribute with respect to time. The approach, which will be referred to as Arrhenius time-scaled (ATS) least squares, enables accurate, precise predictions to be achieved for degradation profiles commonly encountered during bioproduct development. A Monte Carlo study is conducted to compare the proposed approach with the common method of least-squares estimation on the logarithmic form of the Arrhenius equation and nonlinear estimation of a first-order model. The ATS least squares method accommodates a range of degradation profiles, provides a simple and intuitive approach for data presentation, and can be implemented with ease. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
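
    A rough sketch of the time-scaling idea under stated assumptions: degradation curves collected at several temperatures are pooled onto a single Arrhenius-scaled time axis and fit with an ordinary polynomial, with the activation energy Ea as the single nonlinear covariate (details of the paper's estimator may differ):

        import numpy as np
        from scipy.optimize import minimize_scalar

        R = 8.314  # J/(mol K)

        def ats_fit(t, T, y, T_ref=298.15, deg=2):
            # tau = t * exp(-Ea/R * (1/T - 1/T_ref)) pools all temperatures;
            # a polynomial in tau is then fit by ordinary least squares, and
            # Ea is chosen to minimize the pooled residual sum of squares.
            def rss(Ea):
                tau = t * np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))
                p = np.polyfit(tau, y, deg)
                return np.sum((y - np.polyval(p, tau)) ** 2)

            Ea = minimize_scalar(rss, bounds=(1e3, 3e5), method="bounded").x
            tau = t * np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))
            return Ea, np.polyfit(tau, y, deg)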

  8. Least-Squares Regression and Spectral Residual Augmented Classical Least-Squares Chemometric Models for Stability-Indicating Analysis of Agomelatine and Its Degradation Products: A Comparative Study.

    PubMed

    Naguib, Ibrahim A; Abdelrahman, Maha M; El Ghobashy, Mohamed R; Ali, Nesma A

    2016-01-01

    Two accurate, sensitive, and selective stability-indicating methods are developed and validated for the simultaneous quantitative determination of agomelatine (AGM) and its forced degradation products (Deg I and Deg II), whether in pure form or in pharmaceutical formulations. Partial least-squares regression (PLSR) and spectral residual augmented classical least-squares (SRACLS) are two chemometric models that are compared by handling UV spectral data in the range 215-350 nm. For proper analysis, a three-factor, four-level experimental design was established, resulting in a training set of 16 mixtures containing different ratios of the interfering species. An independent test set of eight mixtures was used to validate the prediction ability of the suggested models. The results indicate the ability of the mentioned multivariate calibration models to analyze AGM, Deg I, and Deg II with high selectivity and accuracy. The analysis results for the pharmaceutical formulations were statistically compared to those of a reference HPLC method, with no significant differences observed regarding accuracy and precision. The SRACLS model gives results comparable to the PLSR model; however, it retains the qualitative spectral information of the classical least-squares algorithm for the analyzed components.

  9. Spectrophotometric determination of ternary mixtures of thiamin, riboflavin and pyridoxal in pharmaceutical and human plasma by least-squares support vector machines.

    PubMed

    Niazi, Ali; Zolgharnein, Javad; Afiuni-Zadeh, Somaie

    2007-11-01

    Ternary mixtures of thiamin, riboflavin and pyridoxal have been simultaneously determined in synthetic and real samples by application of spectrophotometry and least-squares support vector machines. The calibration graphs were linear in the ranges of 1.0-20.0, 1.0-10.0 and 1.0-20.0 microg ml(-1), with detection limits of 0.6, 0.5 and 0.7 microg ml(-1) for thiamin, riboflavin and pyridoxal, respectively. The experimental calibration matrix was designed with 21 mixtures of these chemicals. The concentrations were varied within the linear ranges of the calibration graphs. The simultaneous determination of these vitamin mixtures using spectrophotometric methods is a difficult problem due to spectral interferences. Partial least squares (PLS) modeling and least-squares support vector machines were used for the multivariate calibration of the spectrophotometric data. An excellent model was built using LS-SVM, with low prediction errors and superior performance relative to PLS. The root mean square errors of prediction (RMSEP) for thiamin, riboflavin and pyridoxal with PLS and LS-SVM were 0.6926, 0.3755, 0.4322 and 0.0421, 0.0318, 0.0457, respectively. The proposed method was satisfactorily applied to the rapid simultaneous determination of thiamin, riboflavin and pyridoxal in commercial pharmaceutical preparations and human plasma samples.

  10. Partial least squares density modeling (PLS-DM) - a new class-modeling strategy applied to the authentication of olives in brine by near-infrared spectroscopy.

    PubMed

    Oliveri, Paolo; López, M Isabel; Casolino, M Chiara; Ruisánchez, Itziar; Callao, M Pilar; Medini, Luca; Lanteri, Silvia

    2014-12-03

    A new class-modeling method, referred to as partial least squares density modeling (PLS-DM), is presented. The method is based on partial least squares (PLS), using a distance-based sample density measurement as the response variable. Potential function probability density is subsequently calculated on the PLS scores and used, jointly with residual Q statistics, to develop efficient class models. The influence of adjustable model parameters on the resulting performance has been critically studied by means of cross-validation and application of the Pareto optimality criterion. The method has been applied to verify the authenticity of olives in brine from cultivar Taggiasca, based on near-infrared (NIR) spectra recorded on homogenized solid samples. Two independent test sets were used for model validation. The final optimal model was characterized by high efficiency and a well-balanced trade-off between sensitivity and specificity values, compared with those obtained by application of well-established class-modeling methods, such as soft independent modeling of class analogy (SIMCA) and unequal dispersed classes (UNEQ). Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Determining particle size and water content by near-infrared spectroscopy in the granulation of naproxen sodium.

    PubMed

    Bär, David; Debus, Heiko; Brzenczek, Sina; Fischer, Wolfgang; Imming, Peter

    2018-03-20

    Near-infrared spectroscopy is frequently used by the pharmaceutical industry to monitor and optimize several production processes. In combination with chemometrics, a mathematical-statistical technique, near-infrared spectroscopy offers the following advantages: it is a fast, non-destructive, non-invasive, and economical analytical method. One of the most advanced and popular chemometric techniques is the partial least squares algorithm, owing to its applicability in routine use and the quality of its results. The required reference analytics enable the analysis of various parameters of interest, for example, moisture content, particle size, and many others. Parameters like the correlation coefficient, root mean square error of prediction, root mean square error of calibration, and root mean square error of validation have been used for evaluating the applicability and robustness of the analytical methods developed. This study deals with investigating a Naproxen Sodium granulation process using near-infrared spectroscopy and the development of water content and particle-size methods. For the water content method, one should consider a maximum water content of about 21% in the granulation process, which must be confirmed by the loss on drying. Further influences to be considered are the constantly changing product temperature, rising to about 54 °C, the creation of hydrated states of Naproxen Sodium when using a maximum of about 21% water content, and the large quantity of about 87% Naproxen Sodium in the formulation. A combination of these influences was taken into account in developing the near-infrared spectroscopy method for the water content of Naproxen Sodium granules. The "Root Mean Square Error" was 0.25% for the calibration dataset and 0.30% for the validation dataset, obtained after different stages of optimization by multiplicative scatter correction and the first derivative. Using laser diffraction, the granules were analyzed for particle size, obtaining the summary sieve sizes of >63 μm and >100 μm. The following influences should be considered for application in routine production: constant changes in water content up to 21% and a product temperature up to 54 °C. The different stages of optimization result in a "Root Mean Square Error" of 2.54% for the calibration data set and 3.53% for the validation set by using the Kubelka-Munk conversion and first derivative for the near-infrared spectroscopy method for a particle size >63 μm. For the near-infrared spectroscopy method using a particle size >100 μm, the "Root Mean Square Error" was 3.47% for the calibration data set and 4.51% for the validation set, while using the same pre-treatments. The robustness and suitability of this methodology have already been demonstrated by its recent successful implementation in a routine granulate production process. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Accuracy evaluation of distance inverse square law in determining virtual electron source location in Siemens Primus linac.

    PubMed

    Douk, Hamid Shafaei; Aghamiri, Mahmoud Reza; Ghorbani, Mahdi; Farhood, Bagher; Bakhshandeh, Mohsen; Hemmati, Hamid Reza

    2018-01-01

    The aim of this study is to evaluate the accuracy of the inverse square law (ISL) method for determining the location of the virtual electron source (S_vir) in the Siemens Primus linac. To date, different experimental methods have been presented for determining the virtual and effective electron source location, such as the Full Width at Half Maximum (FWHM), Multiple Coulomb Scattering (MCS), Multi Pinhole Camera (MPC), and Inverse Square Law (ISL) methods. Among these, the ISL method is the most commonly used. First, the Siemens Primus linac was simulated using the MCNPX Monte Carlo code. Then, using dose profiles obtained from the Monte Carlo simulations, the location of S_vir was calculated for 5, 7, 8, 10, 12 and 14 MeV electron energies and 10 cm × 10 cm, 15 cm × 15 cm, 20 cm × 20 cm and 25 cm × 25 cm field sizes. Additionally, the location of S_vir was obtained by the ISL method for the same electron energies and field sizes. Finally, the values obtained by the ISL method were compared to the values resulting from the Monte Carlo simulations. The findings indicate that the calculated S_vir values depend on beam energy and field size. For a given energy, the distance of S_vir increases with field size in most cases; likewise, for a given applicator, it increases with electron energy in most cases. S_vir varies more with field size at a given energy than with electron energy at a given field size. According to the results, it is concluded that the ISL method can be considered a good method for calculating the S_vir location at higher electron energies (14 MeV).
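
    The ISL analysis itself reduces to a straight-line fit: under the inverse square law, sqrt(I0/I(g)) = 1 + g/f is linear in the extra gap g, so the virtual source distance f is the reciprocal of the fitted slope. A sketch with made-up readings:

        import numpy as np

        def virtual_source_distance(gap, reading):
            # gap: extra distances g added to the nominal SSD (cm);
            # reading: ionization readings I(g), with reading[0] taken at g = 0.
            # I(g) = I0 * f^2 / (f + g)^2  =>  sqrt(I0 / I(g)) = 1 + g / f,
            # linear in g, so the fitted slope is 1/f.
            g = np.asarray(gap, dtype=float)
            ratio = np.sqrt(reading[0] / np.asarray(reading, dtype=float))
            slope, _ = np.polyfit(g, ratio, 1)
            return 1.0 / slope

        # Illustrative values only
        print(virtual_source_distance([0, 5, 10, 15, 20],
                                      [1.00, 0.905, 0.824, 0.754, 0.693]))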

  13. [Gaussian process regression and its application in near-infrared spectroscopy analysis].

    PubMed

    Feng, Ai-Ming; Fang, Li-Min; Lin, Min

    2011-06-01

    Gaussian process (GP) regression is applied in the present paper as a chemometric method to explore the complicated relationship between near infrared (NIR) spectra and ingredients. After outliers were detected by the Monte Carlo cross-validation (MCCV) method and removed from the dataset, different preprocessing methods, such as multiplicative scatter correction (MSC), smoothing and derivatives, were tried for the best performance of the models. Furthermore, uninformative variable elimination (UVE) was introduced as a variable selection technique, and the characteristic wavelengths obtained were further employed as input for modeling. A public dataset with 80 NIR spectra of corn was used as an example for evaluating the new algorithm. The optimal models for oil, starch and protein were obtained by the GP regression method. The performance of the final models was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV), root mean square error of prediction (RMSEP) and correlation coefficient (r). The models give good calibration ability with r values above 0.99, and the prediction ability is also satisfactory with r values higher than 0.96. The overall results demonstrate that the GP algorithm is an effective chemometric method and is promising for NIR analysis.
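
    A minimal GP-regression sketch using scikit-learn (an assumption; the paper does not specify an implementation), with random placeholder arrays standing in for the preprocessed NIR spectra and reference values:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # X: (n_samples x n_wavelengths) preprocessed NIR spectra (e.g. after
        # MSC and UVE variable selection); y: reference values (oil, starch
        # or protein). Placeholders only, for illustration:
        X = np.random.rand(60, 50)
        y = np.random.rand(60)

        kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(X, y)
        y_pred, y_std = gp.predict(X, return_std=True)
        rmsec = np.sqrt(np.mean((y - y_pred) ** 2))   # RMSEC on the calibration set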

  14. Predictive Array Design. A method for sampling combinatorial chemistry library space.

    PubMed

    Lipkin, M J; Rose, V S; Wood, J

    2002-01-01

    A method, Predictive Array Design, is presented for sampling combinatorial chemistry space and selecting a subarray for synthesis based on the experimental design method of Latin Squares. The method is appropriate for libraries with three sites of variation. Libraries with four sites of variation can be designed using the Graeco-Latin Square. Simulated annealing is used to optimise the physicochemical property profile of the sub-array. The sub-array can be used to make predictions of the activity of compounds in the all combinations array if we assume each monomer has a relatively constant contribution to activity and that the activity of a compound is composed of the sum of the activities of its constitutive monomers.
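
    The Latin-square selection itself is a one-liner: the cyclic square L[i][j] = (i + j) mod n picks n^2 of the n^3 possible compounds so that each monomer appears equally often and every cross-site monomer pair occurs exactly once:

        from itertools import product

        def latin_square_subarray(n):
            # Select n^2 of the n^3 combinations of a 3-site library using the
            # cyclic Latin square: monomer indices (i, j, (i + j) mod n)
            return [(i, j, (i + j) % n) for i, j in product(range(n), repeat=2)]

        print(latin_square_subarray(3))
        # [(0, 0, 0), (0, 1, 1), (0, 2, 2), (1, 0, 1), ...]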

  15. Downdating a time-varying square root information filter

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.

    1990-01-01

    A new method to efficiently downdate an estimate and covariance generated by a discrete time Square Root Information Filter (SRIF) is presented. The method combines the QR factor downdating algorithm of Gill and the decentralized SRIF algorithm of Bierman. Efficient removal of either measurements or a priori information is possible without loss of numerical integrity. Moreover, the method includes features for detecting potential numerical degradation. Performance on a 300 parameter system with 5800 data points shows that the method can be used in real time and hence is a promising tool for interactive data analysis. Additionally, updating a time-varying SRIF filter with either additional measurements or a priori information proceeds analogously.

  16. Sampling for Soil Carbon Stock Assessment in Rocky Agricultural Soils

    NASA Technical Reports Server (NTRS)

    Beem-Miller, Jeffrey P.; Kong, Angela Y. Y.; Ogle, Stephen; Wolfe, David

    2016-01-01

    Coring methods commonly employed in soil organic C (SOC) stock assessment may not accurately capture soil rock fragment (RF) content or soil bulk density (ρb) in rocky agricultural soils, potentially biasing SOC stock estimates. Quantitative pits are considered less biased than coring methods but are invasive and often cost-prohibitive. We compared fixed-depth and mass-based estimates of SOC stocks (0.3 m depth) for hammer, hydraulic push, and rotary coring methods relative to quantitative pits at four agricultural sites ranging in RF content from <0.01 to 0.24 m³ m⁻³. Sampling costs were also compared. Coring methods significantly underestimated RF content at all rocky sites, but significant differences (p < 0.05) in SOC stocks between pits and corers were only found with the hammer method using the fixed-depth approach at the <0.01 m³ m⁻³ RF site (pit, 5.80 kg C m⁻²; hammer, 4.74 kg C m⁻²) and at the 0.14 m³ m⁻³ RF site (pit, 8.81 kg C m⁻²; hammer, 6.71 kg C m⁻²). The hammer corer also underestimated ρb at all sites, as did the hydraulic push corer at the 0.21 m³ m⁻³ RF site. No significant differences in mass-based SOC stock estimates were observed between pits and corers. Our results indicate that (i) calculating SOC stocks on a mass basis can overcome biases in RF and ρb estimates introduced by sampling equipment and (ii) a quantitative pit is the optimal sampling method for establishing reference soil masses, followed by rotary and then hydraulic push corers.

  17. Estimation of the ARNO model baseflow parameters using daily streamflow data

    NASA Astrophysics Data System (ADS)

    Abdulla, F. A.; Lettenmaier, D. P.; Liang, Xu

    1999-09-01

    An approach is described for estimation of baseflow parameters of the ARNO model, using historical baseflow recession sequences extracted from daily streamflow records. This approach allows four of the model parameters to be estimated without rainfall data, and effectively partitions the parameter estimation procedure so that parsimonious search procedures can be used to estimate the remaining storm response parameters separately. Three optimization methods are evaluated for estimation of the four baseflow parameters: the downhill Simplex (S), Simulated Annealing combined with the Simplex method (SA), and Shuffled Complex Evolution (SCE). These estimation procedures are explored in conjunction with four objective functions: (1) ordinary least squares; (2) ordinary least squares with Box-Cox transformation; (3) ordinary least squares on prewhitened residuals; (4) ordinary least squares applied to prewhitened, Box-Cox-transformed residuals. The effects of changing the seed of the random number generator for the SA and SCE methods are also explored, as are the effects of the parameter bounds. Although all schemes converge to the same values of the objective function, the SCE method was found to be less sensitive to these issues than both the SA and Simplex schemes. Parameter uncertainty and interactions are investigated through estimation of the variance-covariance matrix and confidence intervals. As expected, the parameters were found to be correlated and the covariance matrix was not diagonal. Furthermore, the linearized confidence interval theory failed for about one-fourth of the catchments, while the maximum likelihood theory did not fail for any of the catchments.
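
    Objective (2) is easy to state in code. The sketch below assumes a placeholder recession model standing in for the ARNO baseflow prediction; it is not the authors' implementation.

    ```python
    import numpy as np

    def boxcox(x, lam):
        """Box-Cox transform; lam = 0 reduces to the log transform."""
        return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

    def objective_boxcox(params, lam, q_obs, recession_model):
        """Ordinary least squares on Box-Cox-transformed flows
        (objective 2); `recession_model` is a placeholder for the
        ARNO baseflow recession prediction."""
        r = boxcox(q_obs, lam) - boxcox(recession_model(params), lam)
        return np.sum(r**2)

    # toy usage with a simple exponential recession q(t) = q0 * exp(-k t)
    t = np.arange(20.0)
    rng = np.random.default_rng(1)
    q_obs = 5.0 * np.exp(-0.1 * t) * np.exp(0.05 * rng.normal(size=t.size))
    model = lambda p: p[0] * np.exp(-p[1] * t)
    print(objective_boxcox([5.0, 0.1], 0.3, q_obs, model))
    ```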

  18. Multivariate methods on the excitation emission matrix fluorescence spectroscopic data of diesel-kerosene mixtures: a comparative study.

    PubMed

    Divya, O; Mishra, Ashok K

    2007-05-29

    Quantitative determination of the kerosene fraction present in diesel has been carried out based on excitation emission matrix fluorescence (EEMF) along with parallel factor analysis (PARAFAC) and N-way partial least squares regression (N-PLS). EEMF is a simple, sensitive and nondestructive method suitable for the analysis of multifluorophoric mixtures. Calibration models consisting of varying compositions of diesel and kerosene were constructed and validated using the leave-one-out cross-validation method. The accuracy of each model was evaluated through the root mean square error of prediction (RMSEP) for the PARAFAC, N-PLS and unfold-PLS methods. N-PLS was found to outperform PARAFAC and unfold-PLS, giving the lowest RMSEP values.

  19. Sound reproduction in personal audio systems using the least-squares approach with acoustic contrast control constraint.

    PubMed

    Cai, Yefeng; Wu, Ming; Yang, Jun

    2014-02-01

    This paper describes a method for focusing the reproduced sound in the bright zone without disturbing other people in the dark zone in personal audio systems. The proposed method combines the least-squares and acoustic contrast criteria. A constrained parameter is introduced to tune the balance between two performance indices, namely, the acoustic contrast and the spatial average error. An efficient implementation of this method using convex optimization is presented. Offline simulations and real-time experiments using a linear loudspeaker array are conducted to evaluate the performance of the presented method. Results show that compared with the traditional acoustic contrast control method, the proposed method can improve the flatness of response in the bright zone by sacrificing the level of acoustic contrast.
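
    The constrained formulation lends itself directly to a convex solver. The sketch below uses cvxpy with made-up real-valued transfer matrices standing in for measured loudspeaker-to-zone responses (real systems work with complex frequency-domain transfer functions); the dark-zone energy budget eps plays the role of the paper's tuning parameter.

    ```python
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n_src, n_b, n_d = 8, 16, 16
    G_b = rng.normal(size=(n_b, n_src))   # bright-zone transfer matrix (made up)
    G_d = rng.normal(size=(n_d, n_src))   # dark-zone transfer matrix (made up)
    p_t = np.ones(n_b)                    # target pressure in the bright zone

    q = cp.Variable(n_src)                # loudspeaker driving weights
    eps = 1e-2                            # dark-zone energy budget: tunes the
                                          # contrast / flatness trade-off
    prob = cp.Problem(cp.Minimize(cp.sum_squares(G_b @ q - p_t)),
                      [cp.sum_squares(G_d @ q) <= eps])
    prob.solve()

    contrast_db = 10 * np.log10(np.sum((G_b @ q.value)**2) /
                                np.sum((G_d @ q.value)**2))
    print(f"acoustic contrast: {contrast_db:.1f} dB")
    ```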

  20. A spectral mimetic least-squares method for the Stokes equations with no-slip boundary condition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerritsma, Marc; Bochev, Pavel

    Formulation of locally conservative least-squares finite element methods (LSFEMs) for the Stokes equations with the no-slip boundary condition has been a long-standing problem. Existing LSFEMs that yield exactly divergence-free velocities require non-standard boundary conditions (Bochev and Gunzburger, 2009 [3]), while methods that admit the no-slip condition satisfy the incompressibility equation only approximately (Bochev and Gunzburger, 2009 [4, Chapter 7]). Here we address this problem by proving a new non-standard stability bound for the velocity–vorticity–pressure Stokes system augmented with a no-slip boundary condition. This bound gives rise to a norm-equivalent least-squares functional in which the velocity can be approximated by div-conforming finite element spaces, thereby enabling locally conservative approximation of this variable. We also provide a practical realization of the new LSFEM using high-order spectral mimetic finite element spaces (Kreeft et al., 2011) and report several numerical tests, which confirm its mimetic properties.

  1. Application of Rapid Visco Analyser (RVA) viscograms and chemometrics for maize hardness characterisation.

    PubMed

    Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena

    2015-04-15

    It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. The RVA can therefore complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulting prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower than, or of the same order as, the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Exploring the limits of cryospectroscopy: Least-squares based approaches for analyzing the self-association of HCl.

    PubMed

    De Beuckeleer, Liene I; Herrebout, Wouter A

    2016-02-05

    To rationalize the concentration-dependent behavior observed for a large spectral data set of HCl recorded in liquid argon, least-squares based numerical methods are developed and validated. In these methods, for each wavenumber a polynomial is used to mimic the relation between monomer concentrations and measured absorbances. Least-squares fitting of higher-degree polynomials tends to overfit and thus leads to compensation effects, where a contribution due to one species is compensated for by a negative contribution of another. The compensation effects are corrected for by carefully analyzing, using the AIC and BIC information criteria, the differences observed between consecutive fits as the degree of the polynomial model is systematically increased, and by introducing constraints prohibiting negative absorbances for the monomer or for one of the oligomers. The method developed should allow other, more complicated self-associating systems to be analyzed with a much higher accuracy than before. Copyright © 2015 Elsevier B.V. All rights reserved.
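
    The degree-selection step can be sketched with synthetic data: fit polynomials of increasing degree and compare Gaussian AIC/BIC computed from the residual sum of squares. This is an illustration of the criterion, not the authors' fitting code.

    ```python
    import numpy as np

    def aic_bic(rss, n, k):
        """Gaussian AIC/BIC from a residual sum of squares."""
        ll = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
        return 2 * k - 2 * ll, k * np.log(n) - 2 * ll

    rng = np.random.default_rng(3)
    c = np.linspace(0.1, 1.0, 40)                    # monomer concentration
    A = 0.8 * c + 0.5 * c**2 + 0.01 * rng.normal(size=c.size)  # absorbance

    for deg in range(1, 6):
        coef = np.polynomial.polynomial.polyfit(c, A, deg)
        rss = np.sum((np.polynomial.polynomial.polyval(c, coef) - A)**2)
        aic, bic = aic_bic(rss, c.size, deg + 1)
        print(f"degree {deg}: AIC={aic:8.1f}  BIC={bic:8.1f}")
    # both criteria should bottom out near the true degree (2);
    # higher degrees buy little fit at the cost of extra parameters
    ```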

  3. A spectral mimetic least-squares method for the Stokes equations with no-slip boundary condition

    DOE PAGES

    Gerritsma, Marc; Bochev, Pavel

    2016-03-22

    Formulation of locally conservative least-squares finite element methods (LSFEMs) for the Stokes equations with the no-slip boundary condition has been a long-standing problem. Existing LSFEMs that yield exactly divergence-free velocities require non-standard boundary conditions (Bochev and Gunzburger, 2009 [3]), while methods that admit the no-slip condition satisfy the incompressibility equation only approximately (Bochev and Gunzburger, 2009 [4, Chapter 7]). Here we address this problem by proving a new non-standard stability bound for the velocity–vorticity–pressure Stokes system augmented with a no-slip boundary condition. This bound gives rise to a norm-equivalent least-squares functional in which the velocity can be approximated by div-conforming finite element spaces, thereby enabling locally conservative approximation of this variable. We also provide a practical realization of the new LSFEM using high-order spectral mimetic finite element spaces (Kreeft et al., 2011) and report several numerical tests, which confirm its mimetic properties.

  4. An improved method to estimate reflectance parameters for high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Li, Shiying; Deguchi, Koichiro; Li, Renfa; Manabe, Yoshitsugu; Chihara, Kunihiro

    2008-01-01

    Two methods are described to accurately estimate diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness, over the dynamic range of the camera used to capture input images. Neither method needs to segment color areas on an image or to reconstruct a high dynamic range (HDR) image. The second method improves on the first by bypassing the requirement for explicit separation of diffuse and specular reflection components. In the latter method, diffuse and specular reflectance parameters are estimated separately using the least squares method. Reflection values are initially assumed to be diffuse-only reflection components and are subjected to the least squares method to estimate diffuse reflectance parameters. Specular reflection components, obtained by subtracting the computed diffuse reflection components from the reflection values, are then subjected to a logarithmically transformed equation of the Torrance-Sparrow reflection model, and specular reflectance parameters for gloss intensity and surface roughness are finally estimated using the least squares method. Experiments were carried out using both methods on simulation data at different saturation levels, generated according to the Lambert and Torrance-Sparrow reflection models, and using the second method on spectral images captured by an imaging spectrograph with a moving light source. Our results show that the second method can estimate the diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness more accurately and faster than the first one, so that colors and gloss can be reproduced more efficiently for HDR imaging.

  5. Anomalous structural transition of confined hard squares.

    PubMed

    Gurin, Péter; Varga, Szabolcs; Odriozola, Gerardo

    2016-11-01

    Structural transitions are examined in quasi-one-dimensional systems of freely rotating hard squares, which are confined between two parallel walls. We find two competing phases: one is a fluid where the squares have two sides parallel to the walls, while the second is a solidlike structure with a zigzag arrangement of the squares. Using the transfer matrix method we show that the configuration space consists of subspaces of fluidlike and solidlike phases, which are connected by low-probability microstates of mixed structures. The existence of these connecting states makes the thermodynamic quantities continuous and precludes the possibility of a true phase transition. However, the thermodynamic functions indicate a strong tendency toward a phase transition, and our replica exchange Monte Carlo simulation study detects several important markers of a first-order phase transition. Distinguishing a phase transition from a structural change is practically impossible with simulations and experiments in systems such as confined hard squares.

  6. An Algorithm for Computing Matrix Square Roots with Application to Riccati Equation Implementation,

    DTIC Science & Technology

    1977-01-01

    …pansion is compared to Euclid's method. The a priori upper and lower bounds are also calculated. The third part of this paper extends the scalar square root algorithm… (Aerospace Medical Research Laboratory, Aerospace Medical Division, Air Force Systems Command, Wright-Patterson Air Force Base, Ohio 45433.)

  7. Using Least Squares to Solve Systems of Equations

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2016-01-01

    The method of least squares (LS) yields exact solutions for the adjustable parameters when the number of data values n equals the number of parameters "p". This holds also when the fit model consists of "m" different equations and "m = p", which means that LS algorithms can be used to obtain solutions to systems of…

  8. Sterically Hindered Square-Planar Nickel(II) Organometallic Complexes: Preparation, Characterization, and Substitution Behavior

    ERIC Educational Resources Information Center

    Martinez, Manuel; Muller, Guillermo; Rocamora, Merce; Rodriguez, Carlos

    2007-01-01

    The series of experiments proposed for advanced undergraduate students deals with both standard organometallic preparative methods under dry anaerobic conditions and with a kinetic study of the mechanisms operating in the substitution of square-planar complexes. The preparation of organometallic compounds is carried out by transmetallation or…

  9. F-Test Alternatives to Fisher's Exact Test and to the Chi-Square Test of Homogeneity in 2x2 Tables.

    ERIC Educational Resources Information Center

    Overall, John E.; Starbuck, Robert R.

    1983-01-01

    An alternative to Fisher's exact test and the chi-square test for homogeneity in two-by-two tables is developed. The method provides for Type I error rates which are closer to the stated alpha level than either of the alternatives. (JKS)

  10. Orthogonal Regression: A Teaching Perspective

    ERIC Educational Resources Information Center

    Carr, James R.

    2012-01-01

    A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…

  11. Linear Least Squares for Correlated Data

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1988-01-01

    Throughout the literature, authors have consistently discussed the suspicion that regression results were less than satisfactory when the independent variables were correlated. Camm, Gulledge, and Womer, and Womer and Marcotte provide excellent applied examples of these concerns. Many authors have obtained partial solutions to this problem, as discussed by Womer and Marcotte and by Wonnacott and Wonnacott, resulting in generalized least squares algorithms that solve restrictive cases. This paper presents a simple but relatively general multivariate method for obtaining linear least squares coefficients which are free of the statistical distortion created by correlated independent variables.
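
    The distortion the record addresses is easy to demonstrate (the sketch below shows the problem only; it is not Dean's remedy): as two regressors become more correlated, the sampling spread of the OLS coefficients inflates.

    ```python
    import numpy as np

    # As the correlation rho between two regressors grows, OLS
    # coefficient estimates get noisier even though the model is right.
    rng = np.random.default_rng(4)
    for rho in (0.0, 0.9, 0.99):
        betas = []
        for _ in range(500):
            x1 = rng.normal(size=100)
            x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=100)
            y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=100)
            X = np.column_stack([x1, x2])
            betas.append(np.linalg.lstsq(X, y, rcond=None)[0])
        print(f"rho={rho:4.2f}  sd(beta1)={np.std(betas, axis=0)[0]:.3f}")
    # the standard deviation of the estimated coefficient grows sharply
    ```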

  12. Radiation and viscous dissipation effect on square porous annulus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badruddin, Irfan Anjum; Quadir, G. A.

    The present study investigates the effect of radiation and viscous dissipation in a square porous annulus subjected to a hot outside temperature T_h and a cold inside temperature T_c. The square annulus has a hollow section of dimension D×D at its interior. The flow is assumed to obey Darcy's law. The governing equations are non-dimensionalised and solved with the help of the finite element method. Results are discussed with respect to the viscous dissipation parameter, the radiation parameter, and the size of the hollow section of the annulus.

  13. The least-squares mixing models to generate fraction images derived from remote sensing multispectral data

    NASA Technical Reports Server (NTRS)

    Shimabukuro, Yosio Edemir; Smith, James A.

    1991-01-01

    Constrained-least-squares and weighted-least-squares mixing models for generating fraction images derived from remote sensing multispectral data are presented. An experiment was performed considering three components within the pixels: eucalyptus, soil (understory), and shade. The fraction images generated for shade (shade image) by the two methods were compared in terms of performance and computation time. The derived shade images are related to the observed variation in forest structure, i.e., the fraction of inferred shade in a pixel is related to different eucalyptus ages.
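
    A minimal constrained unmixing sketch, with made-up four-band endmember spectra in place of measured ones: fractions are bounded to [0, 1], and the sum-to-one condition is imposed softly through an appended, heavily weighted row of ones, a common implementation device.

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    # Made-up 4-band endmember spectra (columns: eucalyptus, soil, shade).
    E = np.array([[0.05, 0.30, 0.01],
                  [0.08, 0.35, 0.01],
                  [0.40, 0.40, 0.02],
                  [0.30, 0.45, 0.02]])
    pixel = 0.5 * E[:, 0] + 0.3 * E[:, 1] + 0.2 * E[:, 2]  # synthetic mixed pixel

    # Constrained least squares: fractions in [0, 1]; sum-to-one enforced
    # softly via a heavily weighted extra equation.
    w = 100.0
    A = np.vstack([E, w * np.ones((1, 3))])
    b = np.append(pixel, w)
    frac = lsq_linear(A, b, bounds=(0.0, 1.0)).x
    print(frac.round(3))   # ~[0.5, 0.3, 0.2]
    ```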

  14. Accumulated energy norm for full waveform inversion of marine data

    NASA Astrophysics Data System (ADS)

    Shin, Changsoo; Ha, Wansoo

    2017-12-01

    Macro-velocity models are important for imaging the subsurface structure. However, the conventional objective functions of full waveform inversion in the time and the frequency domain have a limited ability to recover the macro-velocity model because of the absence of low-frequency information. In this study, we propose new objective functions that can recover the macro-velocity model by minimizing the difference between the zero-frequency components of the square of seismic traces. Instead of the seismic trace itself, we use the square of the trace, which contains low-frequency information. We apply several time windows to the trace and obtain zero-frequency information of the squared trace for each time window. The shape of the new objective functions shows that they are suitable for local optimization methods. Since we use the acoustic wave equation in this study, this method can be used for deep-sea marine data, in which elastic effects can be ignored. We show that the zero-frequency components of the square of the seismic traces can be used to recover macro-velocities from synthetic and field data.
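
    The proposed misfit can be sketched in a few lines: square the trace, split it into time windows, and compare the per-window zero-frequency (DC) components of observed and synthetic data. This is an illustration of the idea, not the authors' implementation.

    ```python
    import numpy as np

    def dc_of_squared_trace(trace, n_windows):
        """Zero-frequency component of the squared trace per time window
        (the DC Fourier coefficient is just the window sum)."""
        return np.array([w.sum() for w in np.array_split(trace**2, n_windows)])

    def misfit(obs, syn, n_windows=8):
        """Least-squares misfit between windowed DC components of the
        squared traces (a sketch of the proposed objective)."""
        d = dc_of_squared_trace(obs, n_windows) - dc_of_squared_trace(syn, n_windows)
        return 0.5 * np.sum(d**2)

    t = np.linspace(0, 1, 500)
    obs = np.sin(2 * np.pi * 30 * t) * np.exp(-3 * t)
    syn = np.sin(2 * np.pi * 32 * t) * np.exp(-3 * t)
    print(misfit(obs, syn))
    ```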

  15. A parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix

    NASA Technical Reports Server (NTRS)

    Swarztrauber, Paul N.

    1993-01-01

    A parallel algorithm, called polysection, is presented for computing the eigenvalues of a symmetric tridiagonal matrix. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level which ensures that different processors compute different zeros. The signs of the polynomials at the interval endpoints are determined a priori and used to guarantee that all zeros are found. The use of finite-precision arithmetic may result in multiple zeros; however, in this case, the intervals coalesce and their number determines exactly the multiplicity of the zero. For an N x N matrix the eigenvalues can be determined in O(log-squared N) time with N-squared processors and O(N) time with N processors. The method is compared with a parallel variant of bisection that requires O(N-squared) time on a single processor, O(N) time with N processors, and O(log N) time with N-squared processors.

  16. Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space

    PubMed Central

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the sum of squared differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on average, more precise compared to least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters. PMID:24603904
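
    The representation step can be sketched with numpy's Legendre tools: projecting a noisy exponential onto a low-order Legendre basis already removes most of the noise, and the handful of retained coefficients is the low-dimensional space in which the fitting then takes place. The decay rate and noise level below are arbitrary.

    ```python
    import numpy as np
    from numpy.polynomial import legendre as leg

    rng = np.random.default_rng(7)
    t = np.linspace(0.0, 1.0, 1000)
    u = 2.0 * t - 1.0                        # Legendre domain is [-1, 1]
    clean = 3.0 * np.exp(-4.0 * t)
    noisy = clean + 0.3 * rng.normal(size=t.size)

    # Truncating the expansion at a small degree keeps the smooth
    # exponential and discards mostly noise.
    coef = leg.legfit(u, noisy, deg=8)
    denoised = leg.legval(u, coef)
    rms_before = np.sqrt(np.mean((noisy - clean)**2))
    rms_after = np.sqrt(np.mean((denoised - clean)**2))
    print(f"RMS error: {rms_before:.3f} -> {rms_after:.3f}")
    ```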

  17. Nondestructive evaluation of soluble solid content in strawberry by near infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Guo, Zhiming; Huang, Wenqian; Chen, Liping; Wang, Xiu; Peng, Yankun

    This paper indicates the feasibility of using near infrared (NIR) spectroscopy combined with synergy interval partial least squares (siPLS) algorithms as a rapid nondestructive method to estimate the soluble solid content (SSC) in strawberry. Spectral preprocessing methods were optimally selected by cross-validation in the model calibration. The partial least squares (PLS) algorithm was used to calibrate the regression model. The performance of the final model was evaluated according to the root mean square error of calibration (RMSEC) and correlation coefficient (R²c) in the calibration set, and tested by the root mean square error of prediction (RMSEP) and correlation coefficient (R²p) in the prediction set. The optimal siPLS model was obtained after first-derivative spectral preprocessing. The best model achieved RMSEC = 0.2259 and R²c = 0.9590 in the calibration set, and RMSEP = 0.2892 and R²p = 0.9390 in the prediction set. This work demonstrated that NIR spectroscopy and siPLS with efficient spectral preprocessing is a useful tool for nondestructive evaluation of SSC in strawberry.

  18. Weighted least squares phase unwrapping based on the wavelet transform

    NASA Astrophysics Data System (ADS)

    Chen, Jiafeng; Chen, Haiqin; Yang, Zhengang; Ren, Haixia

    2007-01-01

    The weighted least squares phase unwrapping algorithm is a robust and accurate method to solve phase unwrapping problem. This method usually leads to a large sparse linear equation system. Gauss-Seidel relaxation iterative method is usually used to solve this large linear equation. However, this method is not practical due to its extremely slow convergence. The multigrid method is an efficient algorithm to improve convergence rate. However, this method needs an additional weight restriction operator which is very complicated. For this reason, the multiresolution analysis method based on the wavelet transform is proposed. By applying the wavelet transform, the original system is decomposed into its coarse and fine resolution levels and an equivalent equation system with better convergence condition can be obtained. Fast convergence in separate coarse resolution levels speeds up the overall system convergence rate. The simulated experiment shows that the proposed method converges faster and provides better result than the multigrid method.

  19. Cross-correlation least-squares reverse time migration in the pseudo-time domain

    NASA Astrophysics Data System (ADS)

    Li, Qingyang; Huang, Jianping; Li, Zhenchun

    2017-08-01

    The least-squares reverse time migration (LSRTM) method with higher image resolution and amplitude is becoming increasingly popular. However, the LSRTM is not widely used in field land data processing because of its sensitivity to the initial migration velocity model, large computational cost and mismatch of amplitudes between the synthetic and observed data. To overcome the shortcomings of the conventional LSRTM, we propose a cross-correlation least-squares reverse time migration algorithm in pseudo-time domain (PTCLSRTM). Our algorithm not only reduces the depth/velocity ambiguities, but also reduces the effect of velocity error on the imaging results. It relieves the accuracy requirements on the migration velocity model of least-squares migration (LSM). The pseudo-time domain algorithm eliminates the irregular wavelength sampling in the vertical direction, thus it can reduce the vertical grid points and memory requirements used during computation, which makes our method more computationally efficient than the standard implementation. Besides, for field data applications, matching the recorded amplitudes is a very difficult task because of the viscoelastic nature of the Earth and inaccuracies in the estimation of the source wavelet. To relax the requirement for strong amplitude matching of LSM, we extend the normalized cross-correlation objective function to the pseudo-time domain. Our method is only sensitive to the similarity between the predicted and the observed data. Numerical tests on synthetic and land field data confirm the effectiveness of our method and its adaptability for complex models.

  20. Rapid discrimination between buffalo and cow milk and detection of adulteration of buffalo milk with cow milk using synchronous fluorescence spectroscopy in combination with multivariate methods.

    PubMed

    Durakli Velioglu, Serap; Ercioglu, Elif; Boyaci, Ismail Hakki

    2017-05-01

    This research paper describes the potential of synchronous fluorescence (SF) spectroscopy for authentication of buffalo milk, a favourable raw material in the production of some premium dairy products. Like many other high-priced foodstuffs, buffalo milk is subject to fraudulent adulteration. The current methods widely used for the detection of adulteration of buffalo milk have various disadvantages, making them unattractive for routine analysis. Thus, the aim of the present study was to assess the potential of SF spectroscopy in combination with multivariate methods for rapid discrimination between buffalo and cow milk and detection of the adulteration of buffalo milk with cow milk. SF spectra of cow and buffalo milk samples were recorded over the 400-550 nm excitation range with Δλ of 10-100 nm, in steps of 10 nm. The data obtained for Δλ = 10 nm were used to classify the samples using principal component analysis (PCA) and to detect the adulteration level of buffalo milk with cow milk using partial least squares (PLS) methods. Successful discrimination of samples and detection of adulteration of buffalo milk, with a limit of detection (LOD) of 6%, were achieved with models having root mean square errors of calibration (RMSEC), cross-validation (RMSECV), and prediction (RMSEP) of 2, 7, and 4%, respectively. The results reveal the potential of SF spectroscopy for rapid authentication of buffalo milk.

  1. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.

  2. Accurate phase extraction algorithm based on Gram–Schmidt orthonormalization and least square ellipse fitting method

    NASA Astrophysics Data System (ADS)

    Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong

    2018-06-01

    An accurate algorithm combining Gram-Schmidt orthonormalization and least squares ellipse fitting is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. Performing Gram-Schmidt orthonormalization on the pre-processed interferograms corrects the phase shift error and yields a general ellipse form. The background intensity error and the residual correction error can then be compensated by the least squares ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm can cope with two or three interferograms affected by environmental disturbance, low fringe numbers or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
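
    The orthonormalization step can be sketched for two background-suppressed interferograms: Gram-Schmidt synthesizes a quadrature frame from the second interferogram, and the wrapped phase follows from atan2. The ellipse-fitting compensation of the paper is omitted here, and the test phase is synthetic.

    ```python
    import numpy as np

    x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
    phi = 6 * np.pi * (x**2 + y**2)          # synthetic test phase
    delta = 1.2                              # unknown, non-ideal phase shift
    I1 = np.cos(phi)                         # interferograms with the DC term
    I2 = np.cos(phi + delta)                 # already suppressed (pre-processing)

    # Gram-Schmidt: orthonormalize I2 against I1 to synthesize a
    # quadrature (sine-like) frame, then extract the phase with atan2.
    u1 = I1 / np.linalg.norm(I1)
    u2 = I2 - np.sum(u1 * I2) * u1
    u2 /= np.linalg.norm(u2)
    phase = np.arctan2(-u2, u1)              # wrapped phase estimate

    err = np.angle(np.exp(1j * (phase - phi)))   # compare up to wrapping
    print(f"RMS wrapped-phase error: {np.sqrt(np.mean(err**2)):.3f} rad")
    ```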

  3. A consensus least squares support vector regression (LS-SVR) for analysis of near-infrared spectra of plant samples.

    PubMed

    Li, Yankun; Shao, Xueguang; Cai, Wensheng

    2007-04-15

    Consensus modeling, which combines the results of multiple independent models into a single prediction, avoids the instability of a single model. Based on this principle, a consensus least squares support vector regression (LS-SVR) method for calibrating near-infrared (NIR) spectra was proposed. In the proposed approach, NIR spectra of plant samples were first preprocessed using the discrete wavelet transform (DWT) to filter the spectral background and noise; the consensus LS-SVR technique was then used to build the calibration model. With optimization of the parameters involved in the modeling, a satisfactory model was achieved for predicting the content of reducing sugar in plant samples. The predicted results show that the consensus LS-SVR model is more robust and reliable than the conventional partial least squares (PLS) and LS-SVR methods.

  4. Direct quantification of test bacteria in synthetic water-polluted samples by square wave voltammetry and chemometric methods.

    PubMed

    Carpani, Irene; Conti, Paolo; Lanteri, Silvia; Legnani, Pier Paolo; Leoni, Erica; Tonelli, Domenica

    2008-02-28

    A home-made microelectrode array, based on reticulated vitreous carbon, was used as the working electrode in square wave voltammetry experiments to quantify the bacterial load of Escherichia coli ATCC 13706 and Pseudomonas aeruginosa ATCC 27853, chosen as test microorganisms, in synthetic samples similar to drinking water (phosphate buffer). Raw electrochemical signals were analysed with partial least squares regression coupled to variable selection in order to correlate these values with the bacterial load estimated by aerobic plate counting. The results demonstrated the ability of the method to detect even low loads of microorganisms in synthetic water samples. In particular, the model detects the bacterial load in the range 3-2,020 CFU ml⁻¹ for E. coli and in the range 76-155,556 CFU ml⁻¹ for P. aeruginosa.

  5. A new family of stable elements for the Stokes problem based on a mixed Galerkin/least-squares finite element formulation

    NASA Technical Reports Server (NTRS)

    Franca, Leopoldo P.; Loula, Abimael F. D.; Hughes, Thomas J. R.; Miranda, Isidoro

    1989-01-01

    By adding a residual form of the equilibrium equation to the classical Hellinger-Reissner formulation, a new Galerkin/least-squares finite element method is derived. It fits within the framework of a mixed finite element method and is stable for rather general combinations of stress and velocity interpolations, including equal-order discontinuous stress and continuous velocity interpolations, which are unstable within the Galerkin approach. Error estimates are presented based on a generalization of the Babuska-Brezzi theory. Numerical results (not presented herein) have confirmed these estimates as well as the good accuracy and stability of the method.

  6. Nonlinear least squares regression for single image scanning electron microscope signal-to-noise ratio estimation.

    PubMed

    Sim, K S; Norhisham, S

    2016-11-01

    A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. SNR estimation based on the NLLSR method is compared with three existing methods: nearest neighbourhood, first-order interpolation, and the combination of the two. Samples of SEM images with different textures, contrasts and edges were used to test the performance of the NLLSR method in estimating the SNR values of the SEM images. The NLLSR method is shown to produce better estimation accuracy than the three existing methods, with an SNR error of approximately less than 1% relative to them. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  7. Sensitivity test of derivative matrix isopotential synchronous fluorimetry and least squares fitting methods.

    PubMed

    Makkai, Géza; Buzády, Andrea; Erostyák, János

    2010-01-01

    Determination of the concentrations of spectrally overlapping compounds presents special difficulties. Several methods are available to calculate the constituents' concentrations in moderately complex mixtures, and a method which can provide information about spectrally hidden components in mixtures is very useful. Two methods powerful in resolving spectral components are compared in this paper. The first is Derivative Matrix Isopotential Synchronous Fluorimetry (DMISF), based on derivative analysis of MISF spectra, which are constructed using isopotential trajectories in the Excitation-Emission Matrix (EEM) of the background solution. For the DMISF method, a mathematical routine fitting the 3D data of EEMs was developed. The second method uses a classical Least Squares Fitting (LSF) algorithm, wherein Rayleigh- and Raman-scattering bands may lead to complications. Both methods give excellent sensitivity, and each has advantages over the other. Detection limits of DMISF and LSF have been determined at very different concentration and noise levels.

  8. NLINEAR - NONLINEAR CURVE FITTING PROGRAM

    NASA Technical Reports Server (NTRS)

    Everhart, J. L.

    1994-01-01

    A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first terms is presented. From the expansion, a set of n simultaneous linear equations is derived and solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
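
    The iteration NLINEAR describes reduces to a weighted linear solve per step. Below is a compact Gauss-Newton illustration of that scheme in Python, not the original Fortran 77 program.

    ```python
    import numpy as np

    def gauss_newton(f, jac, p0, t, y, sigma, n_iter=20):
        """Minimize chi^2 = sum(((y - f(t, p)) / sigma)^2) by solving the
        linearized normal equations at each iteration (Gauss-Newton)."""
        p = np.asarray(p0, float)
        for _ in range(n_iter):
            r = (y - f(t, p)) / sigma          # weighted residuals
            J = jac(t, p) / sigma[:, None]     # weighted Jacobian
            p = p + np.linalg.lstsq(J, r, rcond=None)[0]
        return p

    # usage: fit an exponential decay a * exp(-k t)
    f = lambda t, p: p[0] * np.exp(-p[1] * t)
    jac = lambda t, p: np.column_stack([np.exp(-p[1] * t),
                                        -p[0] * t * np.exp(-p[1] * t)])
    rng = np.random.default_rng(2)
    t = np.linspace(0, 5, 50)
    sigma = np.full(t.size, 0.05)
    y = f(t, [2.0, 0.7]) + sigma * rng.normal(size=t.size)
    print(gauss_newton(f, jac, [1.0, 1.0], t, y, sigma))  # ~[2.0, 0.7]
    ```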

  9. GSTAR-SUR Modeling With Calendar Variations And Intervention To Forecast Outflow Of Currencies In Java Indonesia

    NASA Astrophysics Data System (ADS)

    Akbar, M. S.; Setiawan; Suhartono; Ruchjana, B. N.; Riyadi, M. A. A.

    2018-03-01

    Ordinary Least Squares (OLS) is the usual method for estimating Generalized Space-Time Autoregressive (GSTAR) model parameters. In some cases, however, the GSTAR residuals are correlated across locations, and OLS estimators are then inefficient. Generalized Least Squares (GLS), the method used in the Seemingly Unrelated Regression (SUR) model, estimates the parameters of a set of models whose residuals are correlated across equations. A simulation study shows that GSTAR with GLS parameter estimation (GSTAR-SUR) is more efficient than GSTAR-OLS. The purpose of this research is to apply GSTAR-SUR with calendar variation and intervention as exogenous variables (GSTARX-SUR) to forecast the outflow of currencies in Java, Indonesia. GSTARX-SUR is found to provide better performance than GSTARX-OLS.

  10. Concerning an application of the method of least squares with a variable weight matrix

    NASA Technical Reports Server (NTRS)

    Sukhanov, A. A.

    1979-01-01

    An estimate of a state vector for a physical system when the weight matrix in the method of least squares is a function of this vector is considered. An iterative procedure is proposed for calculating the desired estimate. Conditions for the existence and uniqueness of the limit of this procedure are obtained, and a domain is found which contains the limit estimate. A second method for calculating the desired estimate which reduces to the solution of a system of algebraic equations is proposed. The question of applying Newton's method of tangents to solving the given system of algebraic equations is considered and conditions for the convergence of the modified Newton's method are obtained. Certain properties of the estimate obtained are presented together with an example.
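
    The first proposed procedure can be sketched as a fixed-point loop: solve a weighted least squares problem, recompute the weight matrix from the new estimate, and repeat until the estimate stops moving. The weight function below is hypothetical, chosen only to make the loop runnable.

    ```python
    import numpy as np

    def wls(H, y, W):
        """Weighted least squares solve for positive-definite W."""
        return np.linalg.solve(H.T @ W @ H, H.T @ W @ y)

    def iterate_variable_weights(H, y, weight_of, n_iter=50, tol=1e-10):
        """Fixed-point iteration for least squares with a state-dependent
        weight matrix W(x): estimate, re-weight, re-estimate.
        `weight_of` maps x -> W(x) and is user supplied (hypothetical)."""
        x = np.linalg.lstsq(H, y, rcond=None)[0]   # OLS starting point
        for _ in range(n_iter):
            x_new = wls(H, y, weight_of(x))
            if np.linalg.norm(x_new - x) < tol:
                break
            x = x_new
        return x

    # toy usage: weights shrink for rows that disagree with the current fit
    rng = np.random.default_rng(5)
    H = rng.normal(size=(30, 2))
    y = H @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=30)
    weight_of = lambda x: np.diag(1.0 / (1.0 + (y - H @ x)**2))
    print(iterate_variable_weights(H, y, weight_of))  # ~[1.0, -2.0]
    ```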

  11. Sound field simulation and acoustic animation in urban squares

    NASA Astrophysics Data System (ADS)

    Kang, Jian; Meng, Yan

    2005-04-01

    Urban squares are important components of cities, and the acoustic environment is important for their usability. While models and formulae for predicting the sound field in urban squares are important for their soundscape design and improvement, acoustic animation tools would be of great importance for designers as well as for the public participation process, given that below a certain sound level the soundscape evaluation depends mainly on the type of sounds rather than their loudness. This paper first briefly introduces acoustic simulation models developed for urban squares, as well as empirical formulae derived from a series of simulations. It then presents an acoustic animation tool currently being developed. In urban squares there are multiple dynamic sound sources, so computation time becomes a main concern. Nevertheless, the requirements for acoustic animation in urban squares are relatively low compared to auditoria. As a result, it is important to simplify the simulation process and algorithms. Based on a series of subjective tests in a virtual reality environment with various simulation parameters, a fast simulation method with acceptable accuracy has been explored. [Work supported by the European Commission.]

  12. Formation mechanism of dot-line square superlattice pattern in dielectric barrier discharge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Weibo; Dong, Lifang, E-mail: donglfhbu@163.com, E-mail: pyy1616@163.com; Wang, Yongjie

    We investigate the formation mechanism of the dot-line square superlattice pattern (DLSSP) in dielectric barrier discharge. The spatio-temporal structure studied by using an intensified charge-coupled device camera shows that the DLSSP is an interleaving of three different subpatterns in one half voltage cycle. The dot square lattice discharges first, and then the two kinds of line square lattices, which form square grid structures, discharge twice. When the gas pressure is varied, the DLSSP can transform from the square superlattice pattern (SSP). The spectral line profile method is used to compare the electron densities, which qualitatively represent the amounts of surface charges. It is found that the amount of surface charge accumulated by the first discharge of the DLSSP is less than that of the SSP, leading to a bigger discharge area of the following discharge (lines of DLSSP instead of halos of SSP). The spatial distribution of the electric field of the surface charges is simulated to explain the formation of the DLSSP. This paper may provide a deeper understanding of the formation mechanism of complex superlattice patterns in DBD.

  13. Approximating a retarded-advanced differential equation that models human phonation

    NASA Astrophysics Data System (ADS)

    Teodoro, M. Filomena

    2017-11-01

    In [1, 2, 3] we obtained the numerical solution of a linear mixed type functional differential equation (MTFDE), introduced initially in [4], considering the autonomous and non-autonomous cases by collocation, least squares and finite element methods with a B-spline basis set. The present work introduces a numerical scheme using the least squares method (LSM) and Gaussian basis functions to solve numerically a nonlinear mixed type equation with symmetric delay and advance which models human phonation. The preliminary results are promising: we obtain an accuracy comparable to the previous results.

  14. Multi-Gaussian fitting for pulse waveform using Weighted Least Squares and multi-criteria decision making method.

    PubMed

    Wang, Lu; Xu, Lisheng; Feng, Shuting; Meng, Max Q-H; Wang, Kuanquan

    2013-11-01

    Analysis of the pulse waveform is a low-cost, non-invasive method for obtaining vital information related to the condition of the cardiovascular system. In recent years, different Pulse Decomposition Analysis (PDA) methods have been applied to disclose the pathological mechanisms of the pulse waveform. All these methods decompose a single-period pulse waveform into a constant number (such as 3, 4 or 5) of individual waves, and they do not pay much attention to the estimation error of the key points in the pulse waveform, even though the estimation of human vascular condition depends on the positions of those key points. In this paper, we propose a Multi-Gaussian (MG) model to fit real pulse waveforms using an adaptive number (4 or 5 in our study) of Gaussian waves. The unknown parameters in the MG model are estimated by the Weighted Least Squares (WLS) method, and the optimized weight values corresponding to different sampling points are selected using the Multi-Criteria Decision Making (MCDM) method. The performance of the MG model and the WLS method was evaluated by fitting 150 real pulse waveforms of five different types. The resulting Normalized Root Mean Square Error (NRMSE) was less than 2.0% and the estimation accuracy for the key points was satisfactory, demonstrating that the proposed method is effective in compressing, synthesizing and analyzing pulse waveforms. Copyright © 2013 Elsevier Ltd. All rights reserved.
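
    The MG decomposition can be sketched with scipy: a sum of Gaussians is fitted by weighted least squares through curve_fit's sigma argument. The uniform weights and three-wave synthetic pulse below are stand-ins for the paper's MCDM-selected weights and real recordings.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def multi_gauss(t, *p):
        """Sum of Gaussians; p = (a1, mu1, s1, a2, mu2, s2, ...)."""
        y = np.zeros_like(t)
        for a, mu, s in zip(p[0::3], p[1::3], p[2::3]):
            y += a * np.exp(-0.5 * ((t - mu) / s)**2)
        return y

    t = np.linspace(0, 1, 200)
    truth = (1.0, 0.2, 0.05, 0.5, 0.45, 0.08, 0.3, 0.7, 0.1)  # 3 waves
    rng = np.random.default_rng(6)
    pulse = multi_gauss(t, *truth) + 0.01 * rng.normal(size=t.size)

    # Weighted fit: curve_fit's `sigma` takes per-sample uncertainties,
    # i.e. inverse weights; uniform weights stand in for MCDM here.
    sigma = np.full(t.size, 0.01)
    p0 = (0.8, 0.15, 0.1, 0.4, 0.5, 0.1, 0.2, 0.75, 0.1)
    popt, _ = curve_fit(multi_gauss, t, pulse, p0=p0, sigma=sigma)
    nrmse = np.sqrt(np.mean((multi_gauss(t, *popt) - pulse)**2)) / np.ptp(pulse)
    print(f"NRMSE: {100 * nrmse:.2f} %")   # small for this synthetic pulse
    ```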

  15. Vehicle Sprung Mass Estimation for Rough Terrain

    DTIC Science & Technology

    2011-03-01

    …distributions are greater than zero. The multivariate polynomials are functions of the Legendre polynomials (Poularikas, 1999)… developed methods based on polynomial chaos theory and on the maximum likelihood approach to estimate the most likely value of the vehicle sprung mass. The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, and extended…

  16. Foundations for estimation by the method of least squares

    NASA Technical Reports Server (NTRS)

    Hauck, W. W., Jr.

    1971-01-01

    Least squares estimation is discussed from the point of view of a statistician. Much of the emphasis is on problems encountered in application and, more specifically, on questions involving assumptions: what assumptions are needed, when are they needed, what happens if they are not valid, and if they are invalid, how that fact can be detected.

  17. Optimization of one-way wave equations.

    USGS Publications Warehouse

    Lee, M.W.; Suh, S.Y.

    1985-01-01

    The theory of wave extrapolation is based on the square-root equation, or one-way equation. The full wave equation represents waves which propagate in both directions; in contrast, the square-root equation represents waves propagating in one direction only. A new optimization method presented here improves the dispersion relation of the one-way wave equation. -from Authors

  18. 14 CFR 420.23 - Launch site location review-flight corridor.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... this part, to contain debris with a ballistic coefficient of ≥ 3 pounds per square foot, from any non... that its proposed method provides an equivalent level of safety to that required by appendix A or B of... of ≥ 3 pounds per square foot, from any non-nominal flight of a guided sub-orbital expendable launch...

  19. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    ERIC Educational Resources Information Center

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  20. An Extension of Least Squares Estimation of IRT Linking Coefficients for the Graded Response Model

    ERIC Educational Resources Information Center

    Kim, Seonghoon

    2010-01-01

    The three types (generalized, unweighted, and weighted) of least squares methods, proposed by Ogasawara, for estimating item response theory (IRT) linking coefficients under dichotomous models are extended to the graded response model. A simulation study was conducted to confirm the accuracy of the extended formulas, and a real data study was…

  1. Metamaterial composition comprising frequency-selective-surface resonant element disposed on/in a dielectric flake, methods, and applications

    DOEpatents

    Shelton, David; Boreman, Glenn; D'Archangel, Jeffrey

    2015-11-10

    Infrared metamaterial arrays containing Au elements immersed in a medium of benzocyclobutene (BCB) were fabricated and selectively etched to produce small square flakes with edge dimensions of approximately 20 μm. Two unit-cell designs were fabricated: one employed crossed-dipole elements while the other utilized square-loop elements.

  2. Stability and square integrability of derivatives of solutions of nonlinear fourth order differential equations with delay.

    PubMed

    Korkmaz, Erdal

    2017-01-01

    In this paper, we give sufficient conditions for the boundedness, uniform asymptotic stability and square integrability of the solutions to a certain fourth order non-autonomous differential equations with delay by using Lyapunov's second method. The results obtained essentially improve, include and complement the results in the literature.

  3. Optimal Least-Squares Unidimensional Scaling: Improved Branch-and-Bound Procedures and Comparison to Dynamic Programming

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Stahl, Stephanie

    2005-01-01

    There are two well-known methods for obtaining a guaranteed globally optimal solution to the problem of least-squares unidimensional scaling of a symmetric dissimilarity matrix: (a) dynamic programming, and (b) branch-and-bound. Dynamic programming is generally more efficient than branch-and-bound, but the former is limited to matrices with…

  4. Comparison between the basic least squares and the Bayesian approach for elastic constants identification

    NASA Astrophysics Data System (ADS)

    Gogu, C.; Haftka, R.; LeRiche, R.; Molimard, J.; Vautrin, A.; Sankar, B.

    2008-11-01

    The basic formulation of the least squares method, based on the L2 norm of the misfit, is still widely used today for identifying elastic material properties from experimental data. An alternative statistical approach is the Bayesian method. We seek here situations with significant differences between the material properties found by the two methods. For a simple three-bar truss example we illustrate three such situations in which the Bayesian approach leads to more accurate results: different magnitudes of the measurements, different uncertainties in the measurements, and correlation among measurements. When all three effects add up, the Bayesian approach can have a large advantage. We then compared the two methods for identification of elastic constants from plate vibration natural frequencies.

  5. Genetic Algorithm for Initial Orbit Determination with Too Short Arc (Continued)

    NASA Astrophysics Data System (ADS)

    Li, X. R.; Wang, X.

    2016-03-01

    When using the genetic algorithm to solve the problem of too-short-arc (TSA) determination, the usual methods for outlier editing are no longer applicable because of the difference in computing processes between the genetic algorithm and the classical method. In the genetic algorithm, robust estimation is achieved by using different loss functions in the fitness function, which solves the outlier problem of TSAs. Compared with the classical method, the application of loss functions in the genetic algorithm is greatly simplified. Comparison of the results for different loss functions shows that the least median of squares and least trimmed squares methods can greatly improve the robustness of TSA determination and have a high breakdown point.
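
    The key change is in the fitness function alone. The sketch below evaluates a least-median-of-squares fitness (with a least-trimmed-squares variant included) on a toy linear model with gross outliers; a plain random search stands in for the genetic algorithm.

    ```python
    import numpy as np

    def residuals(p, t, y):
        return y - (p[0] + p[1] * t)           # toy linear model

    def fitness_lms(p, t, y):
        """Least median of squares: robust up to ~50% outliers."""
        return np.median(residuals(p, t, y)**2)

    def fitness_lts(p, t, y, h=None):
        """Least trimmed squares: sum of the h smallest squared residuals."""
        r2 = np.sort(residuals(p, t, y)**2)
        h = h or int(0.7 * r2.size)
        return r2[:h].sum()

    # random search stands in for the GA; only the fitness changes
    rng = np.random.default_rng(8)
    t = np.linspace(0, 1, 40)
    y = 2.0 + 3.0 * t + 0.05 * rng.normal(size=40)
    y[::8] += 5.0                              # gross outliers

    cands = rng.uniform(-10, 10, size=(20000, 2))
    best = min(cands, key=lambda p: fitness_lms(p, t, y))
    print(best)   # roughly recovers [2, 3] despite the outliers
    ```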

  6. The derivation of vector magnetic fields from Stokes profiles - Integral versus least squares fitting techniques

    NASA Technical Reports Server (NTRS)

    Ronan, R. S.; Mickey, D. L.; Orrall, F. Q.

    1987-01-01

    The results of two methods for deriving photospheric vector magnetic fields from the Zeeman effect, as observed in the Fe I line at 6302.5 A at high spectral resolution (45 mA), are compared. The first method does not take magnetooptical effects into account, but determines the vector magnetic field from the integral properties of the Stokes profiles. The second method is an iterative least-squares fitting technique which fits the observed Stokes profiles to the profiles predicted by the Unno-Rachkovsky solution to the radiative transfer equation. For sunspot fields above about 1500 gauss, the two methods are found to agree in derived azimuthal and inclination angles to within about ±20 deg.

  7. Parallel block schemes for large scale least squares computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golub, G.H.; Plemmons, R.J.; Sameh, A.

    1986-04-01

    Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.

  8. [Establishment of the Mathematical Model for PMI Estimation Using FTIR Spectroscopy and Data Mining Method].

    PubMed

    Wang, L; Qin, X C; Lin, H C; Deng, K F; Luo, Y W; Sun, Q R; Du, Q X; Wang, Z Y; Tuo, Y; Sun, J H

    2018-02-01

    To analyse the relationship between the Fourier transform infrared (FTIR) spectrum of rat spleen tissue and the postmortem interval (PMI) for PMI estimation using FTIR spectroscopy combined with data mining methods. Rats were sacrificed by cervical dislocation, and the cadavers were placed at 20 ℃. The FTIR spectra of the rats' spleen tissues were measured at different time points. After pretreatment, the data were analysed by data mining methods. The absorption peak intensity of the rat spleen tissue spectrum changed with the PMI, while the absorption peak positions were unchanged. The results of principal component analysis (PCA) showed that the cumulative contribution rate of the first three principal components was 96%. There was an obvious clustering tendency for the spectral samples at each time point. Partial least squares discriminant analysis (PLS-DA) and support vector machine classification (SVMC) effectively divided the spectral samples with different PMI into four categories (0-24 h, 48-72 h, 96-120 h and 144-168 h). The determination coefficient (R²) of the PMI estimation model established by PLS regression was 0.96, and the root mean square error of calibration (RMSEC) and root mean square error of cross validation (RMSECV) were 9.90 h and 11.39 h, respectively. In the prediction set, the R² was 0.97, and the root mean square error of prediction (RMSEP) was 10.49 h. The FTIR spectrum of rat spleen tissue can be effectively analyzed qualitatively and quantitatively by combining FTIR spectroscopy with data mining methods, and classification and PLS regression models can be established for PMI estimation. Copyright© by the Editorial Department of Journal of Forensic Medicine.

  9. Spline based least squares integration for two-dimensional shape or wavefront reconstruction

    DOE PAGES

    Huang, Lei; Xue, Junpeng; Gao, Bo; ...

    2016-12-21

    In this paper, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes in a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as a final result. Numerical simulations verify that the proposed method has fewer algorithm errors than two other existing methods used for comparison; especially at the boundaries, the proposed method has better performance. The noise influence is studied by adding white Gaussian noise to the slope data. Finally, experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.
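
    A baseline for comparison is plain finite-difference least squares integration: stack discrete difference operators for both slope directions and solve one sparse least squares problem for the heights. The paper's spline-based version replaces the finite differences with piecewise-polynomial slope fits, which is what improves the boundary behaviour; the sketch below is only this baseline.

    ```python
    import numpy as np
    from scipy.sparse import diags, identity, kron, vstack
    from scipy.sparse.linalg import lsqr

    n = 32
    z_true = np.fromfunction(lambda i, j: np.sin(i / 6.0) + 0.05 * j, (n, n))
    gy, gx = np.gradient(z_true)               # slope "measurements"

    D = diags([-1.0, 1.0], [0, 1], shape=(n - 1, n))   # first differences
    Dx = kron(identity(n), D)                  # z[i, j+1] - z[i, j]
    Dy = kron(D, identity(n))                  # z[i+1, j] - z[i, j]

    # Each difference equals the average of the two adjacent slopes.
    A = vstack([Dx, Dy]).tocsr()
    b = np.concatenate([((gx[:, :-1] + gx[:, 1:]) / 2).ravel(),
                        ((gy[:-1, :] + gy[1:, :]) / 2).ravel()])
    z = lsqr(A, b)[0].reshape(n, n)            # height, up to a constant

    err = (z - z.mean()) - (z_true - z_true.mean())
    print(f"RMS reconstruction error: {np.sqrt(np.mean(err**2)):.2e}")
    ```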

  10. Spline based least squares integration for two-dimensional shape or wavefront reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Xue, Junpeng; Gao, Bo

    In this paper, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes in a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as a final result. Numerical simulations verify that the proposed method has fewer algorithm errors than two other existing methods used for comparison; especially at the boundaries, the proposed method has better performance. The noise influence is studied by adding white Gaussian noise to the slope data. Finally, experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.

  11. The least-squares finite element method for low-mach-number compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Yu, Sheng-Tao

    1994-01-01

    The present paper reports the development of the Least-Squares Finite Element Method (LSFEM) for simulating compressible viscous flows at low Mach numbers, with incompressible flow as the limiting case. Conventional approaches require special treatment for low-speed flow calculations: finite difference and finite volume methods rely on staggered grids or preconditioning techniques, and finite element methods rely on the mixed method and the operator-splitting method. In this paper, however, we show that such difficulty does not exist for the LSFEM and no special treatment is needed. The LSFEM always leads to a symmetric, positive-definite matrix through which the compressible flow equations can be effectively solved. Two numerical examples are included to demonstrate the method: first, driven cavity flows at various Reynolds numbers; and second, buoyancy-driven flows with significant density variation. Both examples are calculated using the full compressible flow equations.
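
    The central claim, that a least-squares formulation leads to a symmetric positive-definite system, can be checked on a toy first-order problem u'(x) = f(x); this is a generic finite-difference least-squares sketch, not the paper's LSFEM discretization.

        # Least-squares solution of D u = f: normal equations give an SPD matrix.
        import numpy as np

        n, h = 50, 1.0 / 50
        # forward-difference operator with the condition u(0) = 0 built in
        D = (np.eye(n) - np.eye(n, k=-1)) / h
        f = np.ones(n)                            # u' = 1, so u(x) = x

        A = D.T @ D                               # normal-equation matrix
        print(np.allclose(A, A.T))                # symmetric
        print(np.all(np.linalg.eigvalsh(A) > 0))  # positive definite
        u = np.linalg.solve(A, D.T @ f)
        print(np.max(np.abs(u - np.linspace(h, 1.0, n))))  # matches u(x) = x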

  12. A partial least squares based spectrum normalization method for uncertainty reduction for laser-induced breakdown spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou

    2013-10-01

    A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement in both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average standard error (error bar), coefficient of determination (R²), root-mean-square error of prediction (RMSEP), and average maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.

  13. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint

    PubMed Central

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2015-01-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in an RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance in certain situations, and comparable performance in other cases, relative to the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575

  14. Noncontact analysis of the fiber weight per unit area in prepreg by near-infrared spectroscopy.

    PubMed

    Jiang, B; Huang, Y D

    2008-05-26

    The fiber weight per unit area in prepreg is an important factor in ensuring the quality of composite products. Near-infrared spectroscopy (NIRS) technology together with a noncontact reflectance source has been applied for quality analysis of the fiber weight per unit area. The range of the unit-area fiber weight was 13.39-14.14 mg cm(-2). Regression was performed by partial least squares (PLS) and principal components regression (PCR). The calibration model was developed from 55 samples to determine the fiber weight per unit area in prepreg. The determination coefficient (R²), root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP) were 0.82, 0.092 and 0.099, respectively. The predicted values of the fiber weight per unit area in prepreg measured by NIRS technology were comparable to the values obtained by the reference method. For this technology, the noncontact reflectance source focused directly on the sample with neither prior treatment nor manipulation. The results of the paired t-test revealed no significant difference between the NIR method and the reference method. Besides, a prepreg sample could be analyzed within 20 s without destruction.

  15. Quantitative methods for structural characterization of proteins based on deep UV resonance Raman spectroscopy.

    PubMed

    Shashilov, Victor A; Sikirzhytski, Vitali; Popova, Ludmila A; Lednev, Igor K

    2010-09-01

    Here we report on novel quantitative approaches for protein structural characterization using deep UV resonance Raman (DUVRR) spectroscopy. Specifically, we propose a new method combining hydrogen-deuterium (HD) exchange and Bayesian source separation for extracting the DUVRR signatures of various structural elements of aggregated proteins, including the cross-beta core and unordered parts of amyloid fibrils. The proposed method is demonstrated using the set of DUVRR spectra of hen egg white lysozyme acquired at various stages of HD exchange. Prior information about the concentration matrix and the spectral features of the individual components was incorporated into the Bayesian equation to eliminate the ill-conditioning of the problem caused by 100% correlation of the concentration profiles of protonated and deuterated species. Secondary structure fractions obtained by partial least squares (PLS) and least squares support vector machines (LS-SVMs) were used as the initial guess for the Bayesian source separation. Advantages of the PLS and LS-SVM methods over classical least squares calibration (CLSC) are discussed and illustrated using the DUVRR data of the prion protein in its native and aggregated forms. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  16. Filter Tuning Using the Chi-Squared Statistic

    NASA Technical Reports Server (NTRS)

    Lilly-Salkowski, Tyler B.

    2017-01-01

    This paper examines the use of the Chi-squared statistic as a means of evaluating filter performance. The goal of the process is to characterize filter performance in the metric of covariance realism. The Chi-squared statistic is calculated to determine the realism of a covariance based on the prediction accuracy and the covariance values at a given point in time. Once calculated, it is the distribution of this statistic that provides insight into the accuracy of the covariance. The process of tuning an Extended Kalman Filter (EKF) for Aqua and Aura support is described, including examination of the measurement errors of available observation types and methods of dealing with potentially volatile atmospheric drag modeling. Predictive accuracy and the distribution of the Chi-squared statistic, calculated from EKF solutions, are assessed.
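
    A generic sketch of the covariance-realism check described here: for an error vector e with predicted covariance P, the statistic e^T P^(-1) e should follow a chi-squared distribution with as many degrees of freedom as e has components (assumed formulation, not the paper's filter code).

        import numpy as np
        from scipy.stats import chi2

        rng = np.random.default_rng(1)
        P = np.diag([4.0, 1.0, 0.25])    # hypothetical predicted covariance
        errors = rng.multivariate_normal(np.zeros(3), P, size=2000)
        # per-sample quadratic form e^T P^{-1} e
        stats = np.einsum('ij,jk,ik->i', errors, np.linalg.inv(P), errors)

        # If P is realistic, ~95% of the statistics fall below the 95th percentile.
        print(np.mean(stats < chi2.ppf(0.95, df=3)))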

  17. A Study on the Stream Cipher Embedded Magic Square of Random Access Files

    NASA Astrophysics Data System (ADS)

    Liu, Chenglian; Zhao, Jian-Ming; Rafsanjani, Marjan Kuchaki; Shen, Yijuan

    2011-09-01

    Magic squares and stream ciphers are both interesting and well-studied topics. In this paper, we propose a new scheme that applies a stream cipher to random access files based on the magic square method. Two thresholds are required to secure the data: decryption by the stream cipher alone is not enough to recover the original source. In addition, we improve the stream cipher model to strengthen its defenses efficiently while retaining the high speed of the key stream generator.

  18. Least square neural network model of the crude oil blending process.

    PubMed

    Rubio, José de Jesús

    2016-06-01

    In this paper, the recursive least square algorithm is designed for the big data learning of a feedforward neural network. The proposed method, as the combination of the recursive least square and a feedforward neural network, obtains four advantages over the standalone algorithms: it requires fewer regressors, it is fast, it has the learning ability, and it is more compact. Stability, convergence, boundedness of parameters, and local minimum avoidance of the proposed technique are guaranteed. The introduced strategy is applied to the modeling of the crude oil blending process. Copyright © 2016 Elsevier Ltd. All rights reserved.
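
    A minimal textbook-style recursive least squares updater with a forgetting factor; this sketches only the RLS half of the proposed combination, not the paper's neural network model.

        import numpy as np

        class RLS:
            """Standard recursive least squares: theta <- theta + K * (y - x.theta)."""
            def __init__(self, n, lam=0.99, delta=1e3):
                self.theta = np.zeros(n)
                self.P = delta * np.eye(n)   # inverse-correlation estimate
                self.lam = lam               # forgetting factor

            def update(self, x, y):
                Px = self.P @ x
                k = Px / (self.lam + x @ Px)             # gain vector
                self.theta += k * (y - x @ self.theta)   # correct by prediction error
                self.P = (self.P - np.outer(k, Px)) / self.lam

        rng = np.random.default_rng(2)
        true_w = np.array([1.5, -0.7, 0.3])
        est = RLS(3)
        for _ in range(500):
            x = rng.normal(size=3)
            est.update(x, x @ true_w + 0.01 * rng.normal())
        print(est.theta)   # close to true_w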

  19. Improving the Ability of Mathematic Representation Capabilities and Students Skills in Importing Square Forms to Square Using Variation Solutions

    NASA Astrophysics Data System (ADS)

    Nirawati, R.

    2018-04-01

    This research was conducted to see whether the variation of solutions is acceptable and easy to understand for students with different levels of ability, so that differences in the ability of students in the upper, middle and lower groups to manipulate the quadratic form could be observed. This research used an experimental method with a factorial design. Based on the results of the final test analysis, there were differences in the ability of students in the upper, middle and lower groups to put expressions into squared form, depending on the variation of solution used.

  20. Probability distribution functions for unit hydrographs with optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh

    2017-05-01

    A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method, and show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
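
    A sketch of the pdf-fitting step, assuming a two-parameter gamma pdf fitted to synthetic unit-hydrograph ordinates by nonlinear least squares (not the Lighvan data, and without the genetic-algorithm variant).

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import gamma

        def uh_model(t, shape, scale):
            # candidate pdf standing in for the unit hydrograph shape
            return gamma.pdf(t, a=shape, scale=scale)

        t = np.arange(1, 25, dtype=float)        # hours
        uh = gamma.pdf(t, a=3.0, scale=2.0)      # "observed" ordinates
        uh += 0.002 * np.random.default_rng(3).normal(size=t.size)

        params, _ = curve_fit(uh_model, t, uh, p0=(2.0, 1.0), bounds=(0, np.inf))
        print(params)    # recovers roughly (3.0, 2.0)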

  1. Limited-memory BFGS based least-squares pre-stack Kirchhoff depth migration

    NASA Astrophysics Data System (ADS)

    Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

    2015-08-01

    Least-squares migration (LSM) is a linearized inversion technique for subsurface reflectivity estimation. Compared to conventional migration algorithms, it can improve spatial resolution significantly within a few iterations. There are three key steps in LSM: (1) calculate data residuals between observed data and demigrated data using the inverted reflectivity model; (2) migrate the data residuals to form the reflectivity gradient; and (3) update the reflectivity model using optimization methods. In order to obtain an accurate, high-resolution inversion result, a good estimate of the inverse Hessian matrix plays a crucial role. However, due to the large size of the Hessian matrix, computing its inverse is always a tough task. The limited-memory BFGS (L-BFGS) method can evaluate the Hessian matrix indirectly using a limited amount of computer memory, maintaining only a history of the past m gradients (often m < 10). We combine the L-BFGS method with least-squares pre-stack Kirchhoff depth migration. Then, we validate the introduced approach on the 2-D Marmousi synthetic data set and a 2-D marine data set. The results show that the introduced method can effectively recover the reflectivity model and has a faster convergence rate than two gradient methods used for comparison. It might be significant for general complex subsurface imaging.
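
    A generic sketch of step (3) with limited-memory BFGS, using SciPy's L-BFGS-B and a random linear operator standing in for demigration; the real method operates on seismic wavefields, so everything below is a placeholder.

        # Minimize the least-squares misfit ||L m - d||^2 with L-BFGS, where L
        # stands in for the demigration operator and m for the reflectivity.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        L = rng.normal(size=(80, 40))
        m_true = rng.normal(size=40)
        d = L @ m_true

        def misfit(m):
            r = L @ m - d                       # data residual
            return 0.5 * r @ r, L.T @ r         # value and gradient ("migrated" residual)

        res = minimize(misfit, np.zeros(40), jac=True, method='L-BFGS-B',
                       options={'maxcor': 10})  # keep only the last 10 gradient pairs
        print(np.max(np.abs(res.x - m_true)))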

  2. Identification of Reliable Components in Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS): a Data-Driven Approach across Metabolic Processes.

    PubMed

    Motegi, Hiromi; Tsuboi, Yuuri; Saga, Ayako; Kagami, Tomoko; Inoue, Maki; Toki, Hideaki; Minowa, Osamu; Noda, Tetsuo; Kikuchi, Jun

    2015-11-04

    There is an increasing need to use multivariate statistical methods for understanding biological functions, identifying the mechanisms of diseases, and exploring biomarkers. In addition to classical analyses such as hierarchical cluster analysis, principal component analysis, and partial least squares discriminant analysis, various multivariate strategies, including independent component analysis, non-negative matrix factorization, and multivariate curve resolution, have recently been proposed. However, determining the number of components is problematic. Despite the proposal of several different methods, no satisfactory approach has yet been reported. To resolve this problem, we implemented a new idea: classifying a component as "reliable" or "unreliable" based on the reproducibility of its appearance, regardless of the number of components in the calculation. Using the clustering method for classification, we applied this idea to multivariate curve resolution-alternating least squares (MCR-ALS). Comparisons between conventional and modified methods applied to proton nuclear magnetic resonance ((1)H-NMR) spectral datasets derived from known standard mixtures and biological mixtures (urine and feces of mice) revealed that more plausible results are obtained by the modified method. In particular, clusters containing little information were detected with reliability. This strategy, named "cluster-aided MCR-ALS," will facilitate the attainment of more reliable results in the metabolomics datasets.

  3. New robust bilinear least squares method for the analysis of spectral-pH matrix data.

    PubMed

    Goicoechea, Héctor C; Olivieri, Alejandro C

    2005-07-01

    A new second-order multivariate method has been developed for the analysis of spectral-pH matrix data, based on a bilinear least-squares (BLLS) model achieving the second-order advantage and handling multiple calibration standards. A simulated Monte Carlo study of synthetic absorbance-pH data allowed comparison of the newly proposed BLLS methodology with constrained parallel factor analysis (PARAFAC) and with the combination multivariate curve resolution-alternating least-squares (MCR-ALS) technique under different conditions of sample-to-sample pH mismatch and analyte-background ratio. The results indicate an improved prediction ability for the new method. Experimental data generated by measuring absorption spectra of several calibration standards of ascorbic acid and samples of orange juice were subjected to second-order calibration analysis with PARAFAC, MCR-ALS, and the new BLLS method. The results indicate that the latter method provides the best analytical results in regard to analyte recovery in samples of complex composition requiring strict adherence to the second-order advantage. Linear dependencies appear when multivariate data are produced by using the pH or a reaction time as one of the data dimensions, posing a challenge to classical multivariate calibration models. The presently discussed algorithm is useful for these latter systems.

  4. Mapping Fire Scars in the Brazilian Cerrado Using AVHRR Imagery

    NASA Technical Reports Server (NTRS)

    Hlavka, C. A.; Ambrosia, V. G.; Brass, J. A.; Rezendez, A.; Alexander, S.; Guild, L. S.; Peterson, David L. (Technical Monitor)

    1995-01-01

    The Brazilian cerrado, or savanna, spans an area of 1,800,000 square kilometers on the great plateau of Central Brazil. Large fires covering hundreds of square kilometers frequently occur in wildland areas of the cerrado dominated by grasslands or grasslands mixed with shrubs and small trees, and also within areas of the cerrado used for agricultural purposes, particularly grazing. Smaller fires, typically extending over areas of a few square kilometers or less, are associated with the clearing of crops, such as dry-land rice. A method for mapping fire scars and differentiating them from extensive areas of bare soil with AVHRR bands 1 (.55-.68 micrometer) and 3 (3.5-3.9 micrometers), together with measures of performance based on comparison with maps of fires from Landsat imagery, will be presented. Methods of estimating total area burned from the AVHRR fire scar map will be discussed and related to land use and scar size.

  5. Least-squares Minimization Approaches to Interpret Total Magnetic Anomalies Due to Spheres

    NASA Astrophysics Data System (ADS)

    Abdelrahman, E. M.; El-Araby, T. M.; Soliman, K. S.; Essa, K. S.; Abo-Ezz, E. R.

    2007-05-01

    We have developed three different least-squares approaches to determine successively: the depth, magnetic angle, and amplitude coefficient of a buried sphere from a total magnetic anomaly. By defining the anomaly value at the origin and the nearest zero-anomaly distance from the origin on the profile, the problem of depth determination is transformed into the problem of finding a solution of a nonlinear equation of the form f(z)=0. Knowing the depth and applying the least-squares method, the magnetic angle and amplitude coefficient are determined using two simple linear equations. In this way, the depth, magnetic angle, and amplitude coefficient are determined individually from all observed total magnetic data. The method is applied to synthetic examples with and without random errors and tested on a field example from Senegal, West Africa. In all cases, the depth solutions are in good agreement with the actual ones.

  6. Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting

    NASA Technical Reports Server (NTRS)

    Badavi, F. F.; Everhart, Joel L.

    1987-01-01

    This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine that is based on P. R. Bevington's description of the quadratic expansion of the Chi-Square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of Chi-Square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived, then solved by matrix algebra. A brief description of the code is presented, along with the limited number of changes required to customize the program for a particular task. To evaluate the performance of the method and the goodness of nonlinear curve fitting, two typical engineering problems are examined and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.

  7. Cost-Sharing of Ecological Construction Based on Trapezoidal Intuitionistic Fuzzy Cooperative Games.

    PubMed

    Liu, Jiacai; Zhao, Wenjian

    2016-11-08

    There exists some fuzziness and uncertainty in the process of ecological construction. The aim of this paper is to develop a direct and effective simplified method for obtaining the cost-sharing scheme when some interested parties form a cooperative coalition to improve the ecological environment of the Min River together. Firstly, we propose the solution concept of the least square prenucleolus of cooperative games with coalition values expressed by trapezoidal intuitionistic fuzzy numbers. Then, based on the square of the distance in numerical value between two trapezoidal intuitionistic fuzzy numbers, we establish a corresponding quadratic programming model to obtain the least square prenucleolus, which can effectively avoid the information distortion and uncertainty enlargement brought about by the subtraction of trapezoidal intuitionistic fuzzy numbers. Finally, we give a numerical example of the cost-sharing of ecological construction in Fujian Province, China, to show the validity, applicability, and advantages of the proposed model and method.

  8. Multiple concurrent recursive least squares identification with application to on-line spacecraft mass-property identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward (Inventor)

    2006-01-01

    The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications of each said group are run, treating other unknown parameters appearing in their regression equation as if they were known perfectly, with said values provided by recursive least squares estimation from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.

  9. Finite analytic numerical solution of heat transfer and flow past a square channel cavity

    NASA Technical Reports Server (NTRS)

    Chen, C.-J.; Obasih, K.

    1982-01-01

    A numerical solution of the flow and heat transfer characteristics is obtained by the finite analytic method for two-dimensional laminar channel flow over a two-dimensional square cavity. The finite analytic method utilizes the local analytic solution in a small element of the problem region to form the algebraic equation relating an interior nodal value to its surrounding nodal values. Stable and rapidly converged solutions were obtained for Reynolds numbers up to 1000 and Prandtl numbers up to 10. Streamfunction, vorticity and temperature profiles are computed, and local and mean Nusselt numbers are given. It is found that the separation streamlines between the cavity and channel flow are concave into the cavity at low Reynolds number and convex at high Reynolds number (Re greater than 100), and that for the square cavity the mean Nusselt number may be approximately correlated with the Peclet number as Nu_m = 0.365 Pe^0.2.

  10. Enhancing Least-Squares Finite Element Methods Through a Quantity-of-Interest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaudhry, Jehanzeb Hameed; Cyr, Eric C.; Liu, Kuo

    2014-12-18

    Here, we introduce an approach that augments least-squares finite element formulations with user-specified quantities-of-interest. The method incorporates the quantity-of-interest into the least-squares functional and inherits the global approximation properties of the standard formulation as well as increased resolution of the quantity-of-interest. We establish theoretical properties such as optimality and enhanced convergence under a set of general assumptions. Central to the approach is that it offers an element-level estimate of the error in the quantity-of-interest. As a result, we introduce an adaptive approach that yields efficient, adaptively refined approximations. Several numerical experiments for a range of situations are presented to support the theory and highlight the effectiveness of our methodology. Notably, the results show that the new approach is effective at improving the accuracy per total computational cost.

  11. Weighted partial least squares based on the error and variance of the recovery rate in calibration set.

    PubMed

    Yu, Shaohui; Xiao, Xue; Ding, Hong; Xu, Ge; Li, Haixia; Liu, Jing

    2017-08-05

    Quantitative analysis is very difficult for the emission-excitation fluorescence spectra of multi-component mixtures whose fluorescence peaks overlap seriously. As an effective method for quantitative analysis, partial least squares can extract latent variables from both the independent and the dependent variables, so it can model multiple correlations between variables. However, some factors usually affect the prediction results of partial least squares, such as the noise and the distribution and number of samples in the calibration set. This work focuses on the calibration-set problems mentioned above. Firstly, outliers in the calibration set are removed by leave-one-out cross-validation. Then, according to two different prediction requirements, the EWPLS method and the VWPLS method are proposed. The independent and dependent variables are weighted in the EWPLS method by the maximum error of the recovery rate and in the VWPLS method by the maximum variance of the recovery rate. Three organic compounds with seriously overlapping excitation-emission fluorescence spectra are selected for the experiments. The step adjustment parameter, the number of iterations and the number of samples in the calibration set are discussed. The results show that the EWPLS and VWPLS methods are superior to the PLS method, especially in the case of small calibration sets. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Methods for Improving Information from ’Undesigned’ Human Factors Experiments.

    DTIC Science & Technology

    Human factors engineering, Information processing, Regression analysis, Experimental design, Least squares method, Analysis of variance, Correlation techniques, Matrices (Mathematics), Multiple disciplines, Mathematical prediction

  13. Product Quality Research Institute evaluation of cascade impactor profiles of pharmaceutical aerosols: part 2--evaluation of a method for determining equivalence.

    PubMed

    Christopher, David; Adams, Wallace P; Lee, Douglas S; Morgan, Beth; Pan, Ziqing; Singh, Gur Jai Pal; Tsong, Yi; Lyapustina, Svetlana

    2007-01-19

    The purpose of this article is to present the thought process, methods, and interim results of a PQRI Working Group, which was charged with evaluating the chi-square ratio test as a potential method for determining in vitro equivalence of aerodynamic particle size distribution (APSD) profiles obtained from cascade impactor measurements. Because this test was designed with the intention of being used as a tool in regulatory review of drug applications, the capability of the test to detect differences in APSD profiles correctly and consistently was evaluated in a systematic way across a designed space of possible profiles. To establish a "base line," properties of the test in the simplest case of pairs of identical profiles were studied. Next, the test's performance was studied with pairs of profiles, where some difference was simulated in a systematic way on a single deposition site using realistic product profiles. The results obtained in these studies, which are presented in detail here, suggest that the chi-square ratio test in itself is not sufficient to determine equivalence of particle size distributions. This article, therefore, introduces the proposal to combine the chi-square ratio test with a test for impactor-sized mass based on Population Bioequivalence and describes methods for evaluating discrimination capabilities of the combined test. The approaches and results described in this article elucidate some of the capabilities and limitations of the original chi-square ratio test and provide rationale for development of additional tests capable of comparing APSD profiles of pharmaceutical aerosols.

  14. Estimating current and future streamflow characteristics at ungaged sites, central and eastern Montana, with application to evaluating effects of climate change on fish populations

    USGS Publications Warehouse

    Sando, Roy; Chase, Katherine J.

    2017-03-23

    A common statistical procedure for estimating streamflow statistics at ungaged locations is to develop a relational model between streamflow and drainage basin characteristics at gaged locations using least squares regression analysis; however, least squares regression methods are parametric and make constraining assumptions about the data distribution. The random forest regression method provides an alternative nonparametric approach for estimating streamflow characteristics at ungaged sites and requires that the data meet fewer statistical conditions than least squares regression methods. Random forest regression analysis was used to develop predictive models for 89 streamflow characteristics using Precipitation-Runoff Modeling System simulated streamflow data and drainage basin characteristics at 179 sites in central and eastern Montana. The predictive models were developed from streamflow data simulated for current (baseline, water years 1982–99) conditions and three future periods (water years 2021–38, 2046–63, and 2071–88) under three different climate-change scenarios. These predictive models were then used to predict streamflow characteristics for baseline conditions and the three future periods at 1,707 fish sampling sites in central and eastern Montana. The average root mean square error for all predictive models was about 50 percent. When streamflow predictions at 23 fish sampling sites were compared to nearby locations with simulated data, the mean relative percent difference was about 43 percent. When predictions were compared to streamflow data recorded at 21 U.S. Geological Survey streamflow-gaging stations outside of the calibration basins, the average mean absolute percent error was about 73 percent.
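
    A sketch of the nonparametric alternative described above, with synthetic basin characteristics and targets standing in for the Montana dataset.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(5)
        X = rng.normal(size=(179, 8))          # hypothetical basin characteristics
        y = X[:, 0] * 2 + np.abs(X[:, 1]) + 0.1 * rng.normal(size=179)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
        rmse = np.sqrt(mean_squared_error(y_te, rf.predict(X_te)))
        print(rmse)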

  15. Prediction Analysis for Measles Epidemics

    NASA Astrophysics Data System (ADS)

    Sumi, Ayako; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi; Olsen, Lars Folke; Kobayashi, Nobumichi

    2003-12-01

    A newly devised procedure of prediction analysis, a linearized version of the nonlinear least squares method combined with the maximum entropy spectral analysis method, is proposed. This method was applied to time series data of measles case notifications in several communities in the UK, USA and Denmark. The dominant spectral lines observed in each power spectral density (PSD) can be safely assigned as fundamental periods. The optimum least squares fitting (LSF) curve calculated using these fundamental periods can essentially reproduce the underlying variation of the measles data. An extension of the LSF curve can be used to predict measles case notifications quantitatively. Some discussion, including the predictability of chaotic time series, is presented.
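
    Once fundamental periods have been read off a power spectral density, the least squares fitting step reduces to a linear fit of sine/cosine pairs; a sketch with assumed periods and synthetic data follows.

        # Linear least-squares fit of sinusoids at known fundamental periods.
        import numpy as np

        t = np.arange(0, 520, dtype=float)      # e.g., weeks
        periods = [52.0, 26.0]                  # assumed fundamental periods
        y = 3 + 2 * np.sin(2 * np.pi * t / 52.0) + 0.3 * np.cos(2 * np.pi * t / 26.0)

        cols = [np.ones_like(t)]                # constant term
        for p in periods:
            cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        fit = A @ coef                          # optimum LSF curve
        print(np.max(np.abs(fit - y)))          # essentially exact here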

  16. An Efficient Estimator for Moving Target Localization Using Multi-Station Dual-Frequency Radars.

    PubMed

    Huang, Jiyan; Zhang, Ying; Luo, Shan

    2017-12-15

    Localization of a moving target in a dual-frequency radars system has now gained considerable attention. The noncoherent localization approach based on a least squares (LS) estimator has been addressed in the literature. Compared with the LS method, a novel localization method based on a two-step weighted least squares estimator is proposed in this paper to increase positioning accuracy for a multi-station dual-frequency radars system. The effects of the signal-to-noise ratio and the number of samples on the performance of range estimation are also analyzed. Furthermore, both the theoretical variance and the Cramer-Rao lower bound (CRLB) are derived. The simulation results verify the proposed method.
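
    The core of any weighted least squares estimator, x = (A^T W A)^(-1) A^T W b, sketched generically with inverse-variance weights; the paper's two-step geometry and weighting matrices are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(6)
        A = rng.normal(size=(30, 4))
        x_true = rng.normal(size=4)
        sigma = rng.uniform(0.1, 1.0, size=30)   # per-measurement noise levels
        b = A @ x_true + sigma * rng.normal(size=30)

        W = np.diag(1.0 / sigma**2)              # inverse-variance weights
        x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
        print(np.max(np.abs(x_hat - x_true)))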

  17. An Efficient Estimator for Moving Target Localization Using Multi-Station Dual-Frequency Radars

    PubMed Central

    Zhang, Ying; Luo, Shan

    2017-01-01

    Localization of a moving target in a dual-frequency radars system has now gained considerable attention. The noncoherent localization approach based on a least squares (LS) estimator has been addressed in the literature. Compared with the LS method, a novel localization method based on a two-step weighted least squares estimator is proposed in this paper to increase positioning accuracy for a multi-station dual-frequency radars system. The effects of the signal-to-noise ratio and the number of samples on the performance of range estimation are also analyzed. Furthermore, both the theoretical variance and the Cramer–Rao lower bound (CRLB) are derived. The simulation results verify the proposed method. PMID:29244727

  18. Neither fixed nor random: weighted least squares meta-analysis.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2015-06-15

    This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects. Copyright © 2015 John Wiley & Sons, Ltd.
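
    The unrestricted weighted least squares average can be sketched directly: regress standardized effects on standardized precision with no intercept, so the slope equals the fixed-effect weighted average while the residual variance supplies a multiplicative dispersion term. This is one reading of the method with made-up numbers, not the authors' code.

        import numpy as np

        effects = np.array([0.12, 0.25, 0.05, 0.30, 0.18])   # hypothetical effect sizes
        se = np.array([0.05, 0.10, 0.04, 0.12, 0.08])        # their standard errors

        # Regress (effect/se) on (1/se) without an intercept.
        y, x = effects / se, 1.0 / se
        b = (x @ y) / (x @ x)                 # identical to the fixed-effect estimate
        phi = np.sum((y - b * x) ** 2) / (len(y) - 1)   # multiplicative dispersion
        se_b = np.sqrt(phi / (x @ x))         # unrestricted WLS standard error
        print(b, se_b)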

  19. Characteristics of solid-core square-lattice microstructured optical fibers using an analytical field model

    NASA Astrophysics Data System (ADS)

    Sharma, Dinesh Kumar; Sharma, Anurag; Tripathi, Saurabh Mani

    2017-11-01

    The excellent propagation properties of square-lattice microstructured optical fibers (MOFs) have been widely recognized. We generalized our recently developed analytical field model (Sharma and Sharma, 2016) for index-guiding MOFs with a square lattice of circular air-holes in the photonic crystal cladding. Using the field model, we have studied the propagation properties of the fundamental mode of index-guiding square-lattice MOFs with different hole-to-hole spacings and air-hole diameters. Results for the modal effective index, the near- and far-field patterns, and the group-velocity dispersion have been included. The evolution of the mode shape has been investigated in the transition from the near- to the far-field domain. We have also studied the splice losses between two identical square-lattice MOFs and between an MOF and a traditional step-index single-mode fiber. Comparisons with available numerical simulation results, e.g., those based on the full-vector finite element method, have also been included.

  20. A revisit to contingency table and tests of independence: bootstrap is preferred to Chi-square approximations as well as Fisher's exact test.

    PubMed

    Lin, Jyh-Jiuan; Chang, Ching-Hui; Pal, Nabendu

    2015-01-01

    To test the mutual independence of two qualitative variables (or attributes), it is a common practice to follow the Chi-square tests (Pearson's as well as likelihood ratio test) based on data in the form of a contingency table. However, it should be noted that these popular Chi-square tests are asymptotic in nature and are useful when the cell frequencies are "not too small." In this article, we explore the accuracy of the Chi-square tests through an extensive simulation study and then propose their bootstrap versions that appear to work better than the asymptotic Chi-square tests. The bootstrap tests are useful even for small-cell frequencies as they maintain the nominal level quite accurately. Also, the proposed bootstrap tests are more convenient than the Fisher's exact test which is often criticized for being too conservative. Finally, all test methods are applied to a few real-life datasets for demonstration purposes.
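
    A sketch of a bootstrap version of Pearson's chi-square independence test along these lines: resample tables under independence (the product of the observed marginals) and compare the observed statistic with the resampled distribution. The exact resampling scheme in the article may differ.

        import numpy as np
        from scipy.stats import chi2_contingency

        obs = np.array([[3, 1], [2, 6]])                  # small-cell contingency table
        n = obs.sum()
        p_null = np.outer(obs.sum(1), obs.sum(0)) / n**2  # independence model

        stat_obs = chi2_contingency(obs, correction=False)[0]
        rng = np.random.default_rng(7)
        boot = []
        for _ in range(5000):
            table = rng.multinomial(n, p_null.ravel()).reshape(obs.shape)
            # skip degenerate tables with an empty row or column
            if (table.sum(0) > 0).all() and (table.sum(1) > 0).all():
                boot.append(chi2_contingency(table, correction=False)[0])
        print(np.mean(np.array(boot) >= stat_obs))        # bootstrap p-value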

  1. Square-lashing technique in segmental spinal instrumentation: a biomechanical study.

    PubMed

    Arlet, Vincent; Draxinger, Kevin; Beckman, Lorne; Steffen, Thomas

    2006-07-01

    Sublaminar wires have been used for many years for segmental spinal instrumentation in scoliosis surgery. More recently, stainless steel wires have been replaced by titanium cables. However, in rigid scoliotic curves, sublaminar wires or simple cables can either break or pull out. The square-lashing technique was devised to avoid complications such as cable breakage or lamina cutout. The purpose of the study was therefore to test biomechanically the pullout and failure mode of simple sublaminar constructs versus the square-lashing technique. Individual vertebrae were subjected to pullout testing with one of two different constructs (single loop and square lashing) using either monofilament wire or multifilament cables. Four different methods of fixation were therefore tested: single wire construct, square-lashing wiring construct, single cable construct, and square-lashing cable construct. Ultimate failure load and failure mechanism were recorded. For the single wire, the construct failed 12/16 times by wire breakage with an average ultimate failure load of 793 N. For the square-lashing wire, the construct failed by pedicle fracture in 14/16 cases, with one bilateral lamina fracture and one wire breakage; the average ultimate failure load was 1,239 N. For the single cable, the construct failed 12/16 times by cable breakage (average force 1,162 N); 10/12 of these breakages occurred where the cable looped over the rod. For the square-lashing cable, all constructs (16/16) failed by fracture of the pedicle with an average ultimate failure load of 1,388 N. The square-lashing construct had a higher pullout strength than the single loop and almost no cutting out from the lamina. The square-lashing technique with cables may therefore represent a new advance in segmental spinal instrumentation.

  2. Validating Clusters with the Lower Bound for Sum-of-Squares Error

    ERIC Educational Resources Information Center

    Steinley, Douglas

    2007-01-01

    Given that a minor condition holds (e.g., the number of variables is greater than the number of clusters), a nontrivial lower bound for the sum-of-squares error criterion in K-means clustering is derived. By calculating the lower bound for several different situations, a method is developed to determine the adequacy of cluster solution based on…

  3. Using Technology to Optimize and Generalize: The Least-Squares Line

    ERIC Educational Resources Information Center

    Burke, Maurice J.; Hodgson, Ted R.

    2007-01-01

    With the help of technology and a basic high school algebra method for finding the vertex of a quadratic polynomial, students can develop and prove the formula for least-squares lines. Students are exposed to the power of a computer algebra system to generalize processes they understand and to see deeper patterns in those processes. (Contains 4…
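
    The algebraic idea can be sketched numerically: after centering the data, the sum of squared errors is a quadratic Am² + Bm + C in the slope m, so the optimal slope is the vertex -B/(2A).

        import numpy as np

        x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
        y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
        xc, yc = x - x.mean(), y - y.mean()   # center the data

        A, B = np.sum(xc**2), -2 * np.sum(xc * yc)
        m = -B / (2 * A)                      # vertex of the quadratic in m
        b = y.mean() - m * x.mean()           # line passes through the centroid
        print(m, b, np.polyfit(x, y, 1))      # matches numpy's least-squares line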

  4. Least Squares Computations in Science and Engineering

    DTIC Science & Technology

    1994-02-01

    iterative least squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem, in the presence of noise, direct...optimization methods. Generally, the problems are accompanied by constraints, such as bound constraints, and the observations are corrupted by noise. The...engineering. This effort has involved interaction with researchers in closed-loop active noise (vibration) control at Phillips Air Force Laboratory

  5. Penrose high-dynamic-range imaging

    NASA Astrophysics Data System (ADS)

    Li, Jia; Bai, Chenyan; Lin, Zhouchen; Yu, Jian

    2016-05-01

    High-dynamic-range (HDR) imaging is becoming increasingly popular and widespread. The most common multishot HDR approach, based on multiple low-dynamic-range images captured with different exposures, has difficulties in handling camera and object movements. The spatially varying exposures (SVE) technology provides a solution to overcome this limitation by obtaining multiple exposures of the scene in only one shot but suffers from a loss in spatial resolution of the captured image. While aperiodic assignment of exposures has been shown to be advantageous during reconstruction in alleviating resolution loss, almost all the existing imaging sensors use the square pixel layout, which is a periodic tiling of square pixels. We propose the Penrose pixel layout, using pixels in aperiodic rhombus Penrose tiling, for HDR imaging. With the SVE technology, Penrose pixel layout has both exposure and pixel aperiodicities. To investigate its performance, we have to reconstruct HDR images in square pixel layout from Penrose raw images with SVE. Since the two pixel layouts are different, the traditional HDR reconstruction methods are not applicable. We develop a reconstruction method for Penrose pixel layout using a Gaussian mixture model for regularization. Both quantitative and qualitative results show the superiority of Penrose pixel layout over square pixel layout.

  6. Amplitude-cyclic frequency decomposition of vibration signals for bearing fault diagnosis based on phase editing

    NASA Astrophysics Data System (ADS)

    Barbini, L.; Eltabach, M.; Hillis, A. J.; du Bois, J. L.

    2018-03-01

    In rotating machine diagnosis, different spectral tools are used to analyse vibration signals. Despite their good diagnostic performance, such tools are usually highly refined, computationally complex to implement, and require oversight of an expert user. This paper introduces an intuitive and easy-to-implement method for vibration analysis: amplitude-cyclic frequency decomposition. This method first separates vibration signals according to their spectral amplitudes and then uses the squared envelope spectrum to reveal the presence of cyclostationarity in each amplitude level. The intuitive idea is that in a rotating machine different components contribute vibrations at different amplitudes; for instance, defective bearings contribute a very weak signal in contrast to gears. This paper also introduces a new quantity, the decomposition squared envelope spectrum, which enables separation between the components of a rotating machine. The amplitude-cyclic frequency decomposition and the decomposition squared envelope spectrum are tested on real-world signals, both at stationary and varying speeds, using data from a wind turbine gearbox and an aircraft engine. In addition, a benchmark comparison with the spectral correlation method is presented.
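
    A sketch of the squared envelope spectrum, the core quantity used above, on a synthetic amplitude-modulated signal; the 107 Hz modulation is an assumed stand-in for a bearing fault frequency.

        import numpy as np
        from scipy.signal import hilbert

        fs = 10_000.0
        t = np.arange(0, 1.0, 1 / fs)
        carrier = np.sin(2 * np.pi * 3000 * t)             # resonance excited by impacts
        am = 1 + 0.5 * np.square(np.sin(np.pi * 107 * t))  # 107 Hz fault modulation
        x = am * carrier + 0.1 * np.random.default_rng(8).normal(size=t.size)

        env2 = np.abs(hilbert(x)) ** 2                     # squared envelope
        spec = np.abs(np.fft.rfft(env2 - env2.mean()))
        freqs = np.fft.rfftfreq(env2.size, 1 / fs)
        print(freqs[np.argmax(spec[1:]) + 1])              # ~107 Hz peak expected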

  7. Time-domain least-squares migration using the Gaussian beam summation method

    NASA Astrophysics Data System (ADS)

    Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo

    2018-04-01

    With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.

  8. Time-domain least-squares migration using the Gaussian beam summation method

    NASA Astrophysics Data System (ADS)

    Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo

    2018-07-01

    With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modelling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modelling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a pre-conditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.

  9. Application of Variational Methods to the Thermal Entrance Region of Ducts

    NASA Technical Reports Server (NTRS)

    Sparrow, E. M.; Siegel. R.

    1960-01-01

    A variational method is presented for solving eigenvalue problems which arise in connection with the analysis of convective heat transfer in the thermal entrance region of ducts. Consideration is given to situations where the temperature profile depends upon one cross-sectional coordinate (e.g., circular tube) or upon two cross-sectional coordinates (e.g., rectangular duct). The variational method is illustrated and verified by application to laminar heat transfer in a circular tube and a parallel-plate channel, and good agreement with existing numerical solutions is attained. Application is then made to laminar heat transfer in a square duct; as a check, an alternate computation for the square duct is made using a method indicated by Millsaps and Pohlhausen. The variational method can, in principle, also be applied to problems in turbulent heat transfer.

  10. The theory precision analyse of RFM localization of satellite remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Jianqing; Xv, Biao

    2009-11-01

    The traditional method of assessing the precision of Rational Function Model (RFM) localization makes use of a large number of check points, calculating the mean square error by comparing computed coordinates with known coordinates. This method is rooted in probability theory: the mean square error is estimated statistically from a large number of samples, and the estimate approaches the true value when the sample is large enough. This paper instead approaches the problem from the standpoint of survey adjustment, taking the law of propagation of error as its theoretical basis, and calculates the theoretical precision of RFM localization. SPOT5 three-line array imagery is then used as experimental data, and the results of the traditional method and the method described in this paper are compared. The comparison confirms that the traditional method is feasible and answers the question of its theoretical precision from the standpoint of survey adjustment.

  11. PRIM: An Efficient Preconditioning Iterative Reweighted Least Squares Method for Parallel Brain MRI Reconstruction.

    PubMed

    Xu, Zheng; Wang, Sheng; Li, Yeqing; Zhu, Feiyun; Huang, Junzhou

    2018-02-08

    The most recent history of parallel Magnetic Resonance Imaging (pMRI) has in large part been devoted to finding ways to reduce acquisition time. While the joint total variation (JTV) regularized model has been demonstrated to be a powerful tool for increasing sampling speed in pMRI, the major bottleneck is the inefficiency of the optimization method. Whereas all present state-of-the-art optimizations for the JTV model reach only a sublinear convergence rate, in this paper we squeeze out more performance by proposing a linearly convergent optimization method for the JTV model. The proposed method is based on the Iterative Reweighted Least Squares algorithm. Due to the complexity of the tangled JTV objective, we design a novel preconditioner to further accelerate the proposed method. Extensive experiments demonstrate the superior performance of the proposed algorithm for pMRI regarding both accuracy and efficiency compared with state-of-the-art methods.
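
    The iterative reweighted least squares idea can be sketched on a small L1-regression problem, where each iteration solves a weighted least-squares system with weights set by the current residuals; the paper's JTV objective and preconditioner are far more elaborate than this generic sketch.

        import numpy as np

        rng = np.random.default_rng(9)
        A = rng.normal(size=(100, 10))
        x_true = rng.normal(size=10)
        b = A @ x_true
        b[::10] += 5.0                      # gross outliers that an L1 fit resists

        x = np.linalg.lstsq(A, b, rcond=None)[0]
        for _ in range(30):                 # IRLS: reweight by 1/|residual|
            w = 1.0 / np.maximum(np.abs(A @ x - b), 1e-6)
            Aw = A * w[:, None]             # row-scaled system, i.e. W A
            x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
        print(np.max(np.abs(x - x_true)))   # close despite the outliers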

  12. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

    Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithms and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithms. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.

  13. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  14. Quantitative Modelling of Trace Elements in Hard Coal.

    PubMed

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades, and coal is expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all except one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three models constructed. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration in modeling the content of trace elements in coal, and in this way contributes to the development of useful tools for coal quality assessment.

  15. Quantitative Modelling of Trace Elements in Hard Coal

    PubMed Central

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades, and coal is expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all except one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three models constructed. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration in modeling the content of trace elements in coal, and in this way contributes to the development of useful tools for coal quality assessment. PMID:27438794

  16. Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.

    PubMed

    Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia

    2017-06-01

    Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error on using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies showed that: (1) the concentration and the analyte type had minimal effect on the OTV; and (2) the major factor that influences the OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra and methane/toluene mixture gas spectra as measured using FT-IR spectrometry and CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of the SWLS has been presented to tackle the bias error from other components. The unmodified SWLS presents the lowest SEP in all cases, but not the lowest bias and RSS. The modified SWLS reduced the bias and showed a lower RSS than CLS, especially for small components.
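
    The selective weighting idea lends itself to a compact sketch. The code below illustrates one plausible reading of SWLS, under the assumption that wavenumbers below an absorbance threshold keep the uniform CLS weight while those above it receive inverse-variance WLS weights; the matrix K, the spectrum, the noise variance, and the threshold are all illustrative placeholders rather than the paper's data or exact algorithm.

```python
# Hedged sketch of selective weighted least squares: per-wavenumber choice
# between uniform (CLS-like) and inverse-variance (WLS) weights.
import numpy as np

def swls_concentrations(K, spectrum, noise_var, threshold):
    """K: (n_wavenumbers, n_components) pure-component absorptivities,
    spectrum: measured absorbance, noise_var: per-wavenumber noise variance."""
    w = np.where(spectrum < threshold, 1.0, 1.0 / noise_var)
    W = np.diag(w)
    # Weighted normal equations: c = (K^T W K)^-1 K^T W a
    return np.linalg.solve(K.T @ W @ K, K.T @ W @ spectrum)

K = np.abs(np.random.default_rng(1).normal(size=(200, 2)))   # toy spectra
c_true = np.array([0.3, 0.7])
a = K @ c_true + np.random.default_rng(2).normal(scale=0.01, size=200)
print(swls_concentrations(K, a, noise_var=1e-4, threshold=0.5))
```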

  17. Retrieval of the non-depolarizing components of depolarizing Mueller matrices by using symmetry conditions and least squares minimization

    NASA Astrophysics Data System (ADS)

    Kuntman, Ertan; Canillas, Adolf; Arteaga, Oriol

    2017-11-01

    Experimental Mueller matrices contain a certain amount of uncertainty in their elements, and these uncertainties can create difficulties for decomposition methods based on analytic solutions. In an earlier paper [1], we proposed a decomposition method for depolarizing Mueller matrices that uses certain symmetry conditions. However, because of experimental error, that method creates over-determined systems with non-unique solutions. Here we propose to use a least squares minimization approach in order to improve the accuracy of our results. In this method, we take into account the number of independent parameters of the corresponding symmetry and the rank constraints on the component matrices to decide on our fitting model. This approach is illustrated with experimental Mueller matrices that include material media with different Mueller symmetries.

  18. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2004-03-23

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

  19. A method for the selection of a functional form for a thermodynamic equation of state using weighted linear least squares stepwise regression

    NASA Technical Reports Server (NTRS)

    Jacobsen, R. T.; Stewart, R. B.; Crain, R. W., Jr.; Rose, G. L.; Myers, A. F.

    1976-01-01

    A method was developed for establishing a rational choice of the terms to be included in an equation of state with a large number of adjustable coefficients. The methods presented were developed for use in the determination of an equation of state for oxygen and nitrogen. However, a general application of the methods is possible in studies involving the determination of an optimum polynomial equation for fitting a large number of data points. The data considered in the least squares problem are experimental thermodynamic pressure-density-temperature data. Attention is given to a description of stepwise multiple regression and the use of stepwise regression in the determination of an equation of state for oxygen and nitrogen.

  20. Detecting outliers when fitting data with nonlinear regression – a new method based on robust nonlinear regression and the false discovery rate

    PubMed Central

    Motulsky, Harvey J; Brown, Ronald E

    2006-01-01

    Background Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-the-squares calculation and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression. Results We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method. When analyzing simulated data, where all scatter is Gaussian, our method falsely detects one or more outliers in only about 1-3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average False Discovery Rate less than 1%. Conclusion Our method, which combines a new method of robust nonlinear regression with a new method of outlier identification, identifies outliers from nonlinear curve fits with reasonable power and few false positives. PMID:16526949
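
    A rough sketch of the ROUT workflow follows: fit robustly (SciPy's 'cauchy' loss corresponds to the Lorentzian scatter assumption named above), flag points with large residuals, then refit the cleaned data by ordinary least squares. The false-discovery-rate step of the actual method is replaced here by a simple residual cutoff, so this is only an approximation of the published procedure.

```python
# ROUT-style workflow: robust fit -> outlier flagging -> ordinary refit.
import numpy as np
from scipy.optimize import least_squares

def model(p, x):
    return p[0] * np.exp(-p[1] * x)

rng = np.random.default_rng(3)
x = np.linspace(0, 4, 50)
y = model([2.0, 0.8], x) + rng.normal(scale=0.05, size=x.size)
y[[5, 30]] += 1.0                                    # two planted outliers

res = lambda p: model(p, x) - y
robust = least_squares(res, x0=[1.0, 1.0], loss="cauchy", f_scale=0.05)
r = model(robust.x, x) - y
keep = np.abs(r) < 5 * np.median(np.abs(r))          # crude stand-in for FDR
clean = least_squares(lambda p: model(p, x[keep]) - y[keep], x0=robust.x)
print("outliers removed:", (~keep).sum(), "params:", clean.x)
```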

  1. The comparison of robust partial least squares regression with robust principal component regression on a real data set

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes the overestimation of the regression parameters and an increase in the variance of these parameters. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed instead. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set, hence making a comparison of the two methods on an inflation model of Turkey. The considered methods have been compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.

  2. An improved conjugate gradient scheme to the solution of least squares SVM.

    PubMed

    Chu, Wei; Ong, Chong Jin; Keerthi, S Sathiya

    2005-03-01

    The least squares support vector machine (LS-SVM) formulation corresponds to the solution of a linear system of equations. Several approaches to its numerical solution have been proposed in the literature. In this letter, we propose an improved method for the numerical solution of LS-SVM and show that the problem can be solved using one reduced system of linear equations. Compared with the existing algorithm for LS-SVM, the approach used in this letter is about twice as efficient. Numerical results using the proposed method are provided for comparisons with other existing algorithms.
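
    The LS-SVM linear system mentioned above can be sketched with a conjugate gradient solver. The code below uses the common reduction to two symmetric positive definite solves with H = K + I/γ; the letter's own single-reduced-system scheme differs in detail, so treat this as a generic baseline, with the RBF kernel width and regularization chosen arbitrarily.

```python
# Generic LS-SVM dual solve via conjugate gradients on H = K + I/gamma.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 2))
y = np.sign(X[:, 0] + X[:, 1])           # toy binary labels
gamma, sigma = 10.0, 1.0                 # arbitrary hyperparameters

# RBF kernel matrix plus ridge term (symmetric positive definite)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
H = np.exp(-d2 / (2 * sigma**2)) + np.eye(len(X)) / gamma

eta, _ = cg(H, np.ones(len(X)))          # H eta = 1
nu, _ = cg(H, y)                         # H nu = y
b = nu.sum() / eta.sum()                 # bias from the constraint row
alpha = nu - b * eta                     # dual coefficients
print("bias b =", b)
```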

  3. Comment on ‘A novel method for fast and robust estimation of fluorescence decay dynamics using constrained least-square deconvolution with Laguerre expansion’

    NASA Astrophysics Data System (ADS)

    Zhang, Yongliang; Day-Uei Li, David

    2017-02-01

    This comment clarifies that Poisson noise, rather than Gaussian noise, should be included to assess the performance of least-squares deconvolution with Laguerre expansion (LSD-LE) for analysing fluorescence lifetime imaging data obtained from time-resolved systems. Moreover, we also corrected an equation in the paper. As the LSD-LE method is rapid and has the potential to be widely applied not only for diagnostic but also for wider bioimaging applications, it is desirable to have precise noise models and equations.

  4. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated when using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
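
    The two strategies compared in the abstract can be mimicked on synthetic multinomial data. In the sketch below, the "adjusted" value rescales the full-sample chi-square by (n_small - 1)/(n_full - 1), which is one common linear adjustment and only an assumption about the study's exact adjustment function; the alternative draws an actual random subsample.

```python
# Adjusted chi-square versus an actual random subsample, on synthetic data.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(5)
full = rng.choice(4, size=21000, p=[0.26, 0.25, 0.25, 0.24])
expected_p = np.full(4, 0.25)            # model under test

def chi2(sample):
    obs = np.bincount(sample, minlength=4)
    return chisquare(obs, expected_p * len(sample)).statistic

n_small = 5000
adjusted = chi2(full) * (n_small - 1) / (len(full) - 1)   # assumed adjustment
subsample = chi2(rng.choice(full, size=n_small, replace=False))
print(f"adjusted: {adjusted:.2f}  random subsample: {subsample:.2f}")
```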

  5. Spacer-Directed Selective Assembly of Copper Square or Hexagon and Ring-Stacks or Coordination Nanotubes.

    PubMed

    Wu, Xialu; Ding, Nini; Zhang, Wenhua; Xue, Fei; Hor, T S Andy

    2015-07-20

    The use of simple self-assembly methods to direct or engineer porosity or channels of desirable functionality is a major challenge in the field of metal-organic frameworks. We herein report a series of frameworks obtained by modifying the square ring structure of [{Cu2(5-dmpy)2(L1)2(H2O)(MeOH)}2{ClO4}4]·4MeOH (1·4MeOH, 5-dmpy = 5,5'-dimethyl-2,2'-bipyridine, HL1 = 4-pyridinecarboxylic acid). Use of pyridyl carboxylates as directional spacers in a bipyridyl-chelated Cu(II) system led to the growth of the square unit into other configurations, namely, square ring, square chain, and square tunnel. Another remarkable characteristic is that the novel use of two isomers of pyridinyl-acrylic acid directs selectively to two different extreme tubular forms: aligned stacking of discrete hexagonal rings and crack-free one-dimensional continuum polymers. This provides a unique example of two extreme forms of copper nanotubes from two isomeric spacers. All of the reactions are performed in a one-pot self-assembly process at room temperature, while the topological selectivity is exclusively determined by the skeletal characteristics of the spacers.

  6. Computing daily mean streamflow at ungaged locations in Iowa by using the Flow Anywhere and Flow Duration Curve Transfer statistical methods

    USGS Publications Warehouse

    Linhart, S. Mike; Nania, Jon F.; Sanders, Curtis L.; Archfield, Stacey A.

    2012-01-01

    The U.S. Geological Survey (USGS) maintains approximately 148 real-time streamgages in Iowa for which daily mean streamflow information is available, but daily mean streamflow data commonly are needed at locations where no streamgages are present. Therefore, the USGS conducted a study as part of a larger project in cooperation with the Iowa Department of Natural Resources to develop methods to estimate daily mean streamflow at locations in ungaged watersheds in Iowa by using two regression-based statistical methods. The regression equations for the statistical methods were developed from historical daily mean streamflow and basin characteristics from streamgages within the study area, which includes the entire State of Iowa and adjacent areas within a 50-mile buffer of Iowa in neighboring states. Results of this study can be used with other techniques to determine the best method for application in Iowa and can be used to produce a Web-based geographic information system tool to compute streamflow estimates automatically. The Flow Anywhere statistical method is a variation of the drainage-area-ratio method, which transfers same-day streamflow information from a reference streamgage to another location by using the daily mean streamflow at the reference streamgage and the drainage-area ratio of the two locations. The Flow Anywhere method modifies the drainage-area-ratio method in order to regionalize the equations for Iowa and determine the best reference streamgage from which to transfer same-day streamflow information to an ungaged location. Data used for the Flow Anywhere method were retrieved for 123 continuous-record streamgages located in Iowa and within a 50-mile buffer of Iowa. The final regression equations were computed by using either left-censored regression techniques with a low limit threshold set at 0.1 cubic feet per second (ft3/s) and the daily mean streamflow for the 15th day of every other month, or by using an ordinary-least-squares multiple linear regression method and the daily mean streamflow for the 15th day of every other month. The Flow Duration Curve Transfer method was used to estimate unregulated daily mean streamflow from the physical and climatic characteristics of gaged basins. For the Flow Duration Curve Transfer method, daily mean streamflow quantiles at the ungaged site were estimated with the parameter-based regression model, which results in a continuous daily flow-duration curve (the relation between exceedance probability and streamflow for each day of observed streamflow) at the ungaged site. By the use of a reference streamgage, the Flow Duration Curve Transfer is converted to a time series. Data used in the Flow Duration Curve Transfer method were retrieved for 113 continuous-record streamgages in Iowa and within a 50-mile buffer of Iowa. The final statewide regression equations for Iowa were computed by using a weighted-least-squares multiple linear regression method and were computed for the 0.01-, 0.05-, 0.10-, 0.15-, 0.20-, 0.30-, 0.40-, 0.50-, 0.60-, 0.70-, 0.80-, 0.85-, 0.90-, and 0.95-exceedance probability statistics determined from the daily mean streamflow with a reporting limit set at 0.1 ft3/s. The final statewide regression equation for Iowa computed by using left-censored regression techniques was computed for the 0.99-exceedance probability statistic determined from the daily mean streamflow with a low limit threshold and a reporting limit set at 0.1 ft3/s. 
For the Flow Anywhere method, results of the validation study conducted by using six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 1,016 to 138 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 1,690 to 237 ft3/s. Values of the percent root-mean-square error ranged from 115 percent to 26.2 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 13.0 to 5.3 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.80 to 0.40. Percent-bias values ranged from 25.4 to 4.0 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.35. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.86 to 0.56. For the streamgage with the best agreement between observed and estimated streamflow, higher streamflows appear to be underestimated. For the streamgage with the worst agreement between observed and estimated streamflow, low flows appear to be overestimated whereas higher flows seem to be underestimated. Estimated cumulative streamflows for the period October 1, 2004, to September 30, 2009, are underestimated by -25.8 and -7.4 percent for the closest and poorest comparisons, respectively. For the Flow Duration Curve Transfer method, results of the validation study conducted by using the same six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 437 to 93.9 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 906 to 169 ft3/s. Values of the percent root-mean-square-error ranged from 67.0 to 25.6 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 12.5 to 4.4 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.79 to 0.40. Percent-bias values ranged from 22.7 to 0.94 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.38. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.89 to 0.48. For the streamgage with the closest agreement between observed and estimated streamflow, there is relatively good agreement between observed and estimated streamflows. For the streamgage with the poorest agreement between observed and estimated streamflow, streamflows appear to be substantially underestimated for much of the time period. Estimated cumulative streamflow for the period October 1, 2004, to September 30, 2009, are underestimated by -9.3 and -22.7 percent for the closest and poorest comparisons, respectively.
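
    At the core of the Flow Anywhere method is the drainage-area-ratio transfer, which in its unmodified textbook form scales same-day flow at a reference streamgage by the ratio of drainage areas. The sketch below shows that baseline form with illustrative numbers; the regionalized regression equations developed in the report replace this fixed form with fitted terms.

```python
# Textbook drainage-area-ratio transfer of same-day streamflow.
def drainage_area_ratio_flow(q_ref, area_ref, area_ungaged, exponent=1.0):
    """Estimate daily mean streamflow at an ungaged site from a reference
    streamgage. An exponent of 1.0 is the classical form; regionalized
    variants fit the exponent (and other terms) by regression."""
    return q_ref * (area_ungaged / area_ref) ** exponent

# 350 ft3/s observed at a 120 mi2 reference basin; ungaged basin is 80 mi2.
print(drainage_area_ratio_flow(350.0, 120.0, 80.0))   # about 233 ft3/s
```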

  7. An on-line modified least-mean-square algorithm for training neurofuzzy controllers.

    PubMed

    Tan, Woei Wan

    2007-04-01

    The problem hindering the use of data-driven modelling methods for training controllers on-line is the lack of control over the amount by which the plant is excited. As the operating schedule determines the information available on-line, the knowledge of the process may degrade if the setpoint remains constant for an extended period. This paper proposes an identification algorithm that alleviates "learning interference" by incorporating fuzzy theory into the normalized least-mean-square update rule. The ability of the proposed methodology to achieve faster learning is examined by employing the algorithm to train a neurofuzzy feedforward controller for controlling a liquid level process. Since the proposed identification strategy has similarities with the normalized least-mean-square update rule and the recursive least-square estimator, the on-line learning rates of these algorithms are also compared.
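
    The normalized least-mean-square rule that the paper builds on is short enough to show directly. The sketch below implements a plain NLMS update on a synthetic identification task; the fuzzy modulation of the learning rate that constitutes the paper's contribution is not reproduced.

```python
# Plain normalized LMS (NLMS) identification of a fixed linear plant.
import numpy as np

def nlms_step(w, x, d, mu=0.5, eps=1e-8):
    """One NLMS update: w are adaptive weights, x the regressor vector,
    d the desired output, mu the step size; eps avoids division by zero."""
    e = d - w @ x                          # a-priori error
    return w + mu * e * x / (x @ x + eps), e

rng = np.random.default_rng(6)
w_true = np.array([0.5, -0.3, 0.8])
w = np.zeros(3)
for _ in range(500):
    x = rng.normal(size=3)
    d = w_true @ x + rng.normal(scale=0.01)
    w, _ = nlms_step(w, x, d)
print(w)                                   # converges close to w_true
```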

  8. Fast and nondestructive determination of protein content in rapeseeds (Brassica napus L.) using Fourier transform infrared photoacoustic spectroscopy (FTIR-PAS).

    PubMed

    Lu, Yuzhen; Du, Changwen; Yu, Changbing; Zhou, Jianmin

    2014-08-01

    Fast and non-destructive determination of rapeseed protein content carries significant implications in rapeseed production. This study presented the first attempt at using Fourier transform mid-infrared photoacoustic spectroscopy (FTIR-PAS) to quantify the protein content of rapeseed. The full-spectrum model was first built using partial least squares (PLS). Interval selection methods including interval partial least squares (iPLS), synergy interval partial least squares (siPLS), backward elimination interval partial least squares (biPLS) and dynamic backward elimination interval partial least squares (dyn-biPLS) were then employed to select the relevant band or band combination for PLS modeling. The full-spectrum PLS model achieved a ratio of prediction to deviation (RPD) of 2.047. In comparison, all interval selection methods produced better results than full-spectrum modeling. siPLS achieved the best predictive accuracy, with an RPD of 3.215 when the spectrum was sectioned into 25 intervals and two intervals (1198-1335 and 1614-1753 cm(-1)) were selected. iPLS outperformed biPLS and dyn-biPLS, and dyn-biPLS performed slightly better than biPLS. FTIR-PAS was verified as a promising analytical tool to quantify rapeseed protein content. Interval selection could extract the relevant individual band or synergy band associated with the sample constituent of interest, and thereby improve the prediction accuracy over the full-spectrum model. © 2013 Society of Chemical Industry.
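
    The synergy-interval idea can be sketched compactly: split the spectrum into intervals, then search interval pairs for the lowest cross-validated RMSE. The data below are synthetic placeholders for the FTIR-PAS spectra, and the interval count and component number are illustrative choices.

```python
# siPLS-style search: best pair of spectral intervals by cross-validated RMSE.
import numpy as np
from itertools import combinations
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(7)
X = rng.normal(size=(60, 500))                    # 60 spectra, 500 points
y = X[:, 100:120].mean(axis=1) + rng.normal(scale=0.05, size=60)

n_intervals = 25
bounds = np.array_split(np.arange(X.shape[1]), n_intervals)

def rmsecv(cols):
    pls = PLSRegression(n_components=3)
    y_cv = cross_val_predict(pls, X[:, cols], y, cv=5).ravel()
    return np.sqrt(np.mean((y - y_cv) ** 2))

best = min(combinations(range(n_intervals), 2),
           key=lambda pair: rmsecv(np.concatenate([bounds[i] for i in pair])))
print("best interval pair:", best)
```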

  9. Effects of Linking Methods on Detection of DIF.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    1992-01-01

    Effects of the following methods for linking metrics on detection of differential item functioning (DIF) were compared: (1) test characteristic curve method (TCC); (2) weighted mean and sigma method; and (3) minimum chi-square method. With large samples, results were essentially the same. With small samples, TCC was most accurate. (SLD)

  10. The application of continuous wavelet transform and least squares support vector machine for the simultaneous quantitative spectrophotometric determination of Myricetin, Kaempferol and Quercetin as flavonoids in pharmaceutical plants

    NASA Astrophysics Data System (ADS)

    Sohrabi, Mahmoud Reza; Darabi, Golnaz

    2016-01-01

    Flavonoids are γ-benzopyrone derivatives, which are highly regarded by researchers for their antioxidant properties. In this study, two new signal processing methods were coupled with UV spectroscopy for spectral resolution and simultaneous quantitative determination of Myricetin, Kaempferol and Quercetin as flavonoids in Laurel, St. John's Wort and Green Tea, without the need for any previous separation procedure. The developed methods are continuous wavelet transform (CWT) and least squares support vector machine (LS-SVM) methods, each integrated with UV spectroscopy individually. Different wavelet families were tested in the CWT method, and finally the Daubechies wavelet family (Db4) for Myricetin and the Gaussian wavelet families for Kaempferol (Gaus3) and Quercetin (Gaus7) were selected and applied for simultaneous analysis under the optimal conditions. The LS-SVM was applied to build the flavonoid prediction model based on absorption spectra. The root mean square errors of prediction (RMSEP) of Myricetin, Kaempferol and Quercetin were 0.0552, 0.0275 and 0.0374, respectively. The developed methods were validated by the analysis of various synthetic mixtures with well-known flavonoid contents. Mean recovery values of Myricetin, Kaempferol and Quercetin were 100.123, 100.253 and 100.439 in the CWT method and 99.94, 99.81 and 99.682 in the LS-SVM method, respectively. The results achieved by analyzing the real samples with the CWT and LS-SVM methods were compared to the HPLC reference method and were very close to the reference method. Meanwhile, the results of the one-way ANOVA (analysis of variance) test revealed that there was no significant difference between the suggested methods.

  11. The application of continuous wavelet transform and least squares support vector machine for the simultaneous quantitative spectrophotometric determination of Myricetin, Kaempferol and Quercetin as flavonoids in pharmaceutical plants.

    PubMed

    Sohrabi, Mahmoud Reza; Darabi, Golnaz

    2016-01-05

    Flavonoids are γ-benzopyrone derivatives, which are highly regarded by researchers for their antioxidant properties. In this study, two new signal processing methods were coupled with UV spectroscopy for spectral resolution and simultaneous quantitative determination of Myricetin, Kaempferol and Quercetin as flavonoids in Laurel, St. John's Wort and Green Tea, without the need for any previous separation procedure. The developed methods are continuous wavelet transform (CWT) and least squares support vector machine (LS-SVM) methods, each integrated with UV spectroscopy individually. Different wavelet families were tested in the CWT method, and finally the Daubechies wavelet family (Db4) for Myricetin and the Gaussian wavelet families for Kaempferol (Gaus3) and Quercetin (Gaus7) were selected and applied for simultaneous analysis under the optimal conditions. The LS-SVM was applied to build the flavonoid prediction model based on absorption spectra. The root mean square errors of prediction (RMSEP) of Myricetin, Kaempferol and Quercetin were 0.0552, 0.0275 and 0.0374, respectively. The developed methods were validated by the analysis of various synthetic mixtures with well-known flavonoid contents. Mean recovery values of Myricetin, Kaempferol and Quercetin were 100.123, 100.253 and 100.439 in the CWT method and 99.94, 99.81 and 99.682 in the LS-SVM method, respectively. The results achieved by analyzing the real samples with the CWT and LS-SVM methods were compared to the HPLC reference method and were very close to the reference method. Meanwhile, the results of the one-way ANOVA (analysis of variance) test revealed that there was no significant difference between the suggested methods. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Near-infrared Spectroscopy as a Process Analytical Technology Tool for Monitoring the Parching Process of Traditional Chinese Medicine Based on Two Kinds of Chemical Indicators.

    PubMed

    Li, Kaiyue; Wang, Weiying; Liu, Yanping; Jiang, Su; Huang, Guo; Ye, Liming

    2017-01-01

    The active ingredients and thus pharmacological efficacy of traditional Chinese medicine (TCM) at different degrees of the parching process vary greatly. Near-infrared spectroscopy (NIR) was used to develop a new method for rapid online analysis of the TCM parching process, using two kinds of chemical indicators (5-(hydroxymethyl)furfural [5-HMF] content and 420 nm absorbance) as reference values, which are clearly observable and change in most TCM parching processes. Three representative TCMs, Areca (Areca catechu L.), Malt (Hordeum vulgare L.), and Hawthorn (Crataegus pinnatifida Bge.), were used in this study. With partial least squares regression, calibration models of NIR were generated based on the two kinds of reference values, i.e. 5-HMF contents measured by high-performance liquid chromatography (HPLC) and 420 nm absorbance measured by ultraviolet-visible spectroscopy (UV/Vis), respectively. In the optimized models for 5-HMF, the root mean square errors of prediction (RMSEP) for Areca, Malt, and Hawthorn were 0.0192, 0.0301, and 0.2600 and the correlation coefficients (R cal) were 99.86%, 99.88%, and 99.88%, respectively. Moreover, in the optimized models using 420 nm absorbance as reference values, the RMSEP for Areca, Malt, and Hawthorn were 0.0229, 0.0096, and 0.0409 and R cal were 99.69%, 99.81%, and 99.62%, respectively. NIR models with 5-HMF content and 420 nm absorbance as reference values can rapidly and effectively identify the three kinds of TCM in different parching processes. This method has great promise to replace current subjective color judgment and time-consuming HPLC or UV/Vis methods and is suitable for rapid online analysis and quality control in the TCM industrial manufacturing process. Summary: NIR was used to develop a new method for online analysis of the TCM parching process. Calibration and validation models of Areca, Malt, and Hawthorn were generated by partial least squares regression using 5-(hydroxymethyl)furfural contents and 420 nm absorbance as reference values, respectively, which were the main indicator quantities during the parching process of most TCMs. The established NIR models of the three TCMs had low root mean square errors of prediction and high correlation coefficients. The NIR method has great promise for use in TCM industrial manufacturing processes for rapid online analysis and quality control. Abbreviations used: NIR: near-infrared spectroscopy; TCM: traditional Chinese medicine; Areca: Areca catechu L.; Hawthorn: Crataegus pinnatifida Bge.; Malt: Hordeum vulgare L.; 5-HMF: 5-(hydroxymethyl)furfural; PLS: partial least squares; D: Dimension faction; SLS: straight line subtraction; MSC: multiplicative scatter correction; VN: vector normalization; RMSECV: root mean square error of cross-validation; RMSEP: root mean square error of prediction; R cal: correlation coefficient; RPD: residual predictive deviation; PAT: process analytical technology; FDA: Food and Drug Administration; ICH: International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use.

  13. Analysis of the multigroup model for muon tomography based threat detection

    NASA Astrophysics Data System (ADS)

    Perry, J. O.; Bacon, J. D.; Borozdin, K. N.; Fabritius, J. M.; Morris, C. L.

    2014-02-01

    We compare different algorithms for detecting a 5 cm tungsten cube using cosmic ray muon technology. In each case, a simple tomographic technique was used for position reconstruction, but the scattering angles were used differently to obtain a density signal. Receiver operating characteristic curves were used to compare images made using average angle squared, median angle squared, average of the squared angle, and a multi-energy group fit of the angular distributions for scenes with and without a 5 cm tungsten cube. The receiver operating characteristic curves show that the multi-energy group treatment of the scattering angle distributions is the superior method for image reconstruction.

  14. Non-destructive and rapid prediction of moisture content in red pepper (Capsicum annuum L.) powder using near-infrared spectroscopy and a partial least squares regression model

    USDA-ARS?s Scientific Manuscript database

    Purpose: The aim of this study was to develop a technique for the non-destructive and rapid prediction of the moisture content in red pepper powder using near-infrared (NIR) spectroscopy and a partial least squares regression (PLSR) model. Methods: Three red pepper powder products were separated in...

  15. Understory Vegetation and Overstory Growth in Pine and Pine-Hardwood Shelterwood Stands in the Ouachita Mountains: 5-Year Results

    Treesearch

    Michael G. Shelton

    2004-01-01

    Abstract - Treatments were two overstory compositions (a pine basal area of 30 square feet per acre with and without 15 square feet per acre of hardwoods) and two methods of submerchantable hardwood control (chainsaw felling with and without stump-applied herbicide). After the fifth growing season, pine regeneration averaged 1,870 seedlings per acre...

  16. Methods and computer program documentation for determining anisotropic transmissivity tensor components of two-dimensional ground-water flow

    USGS Publications Warehouse

    Maslia, M.L.; Randolph, R.B.

    1986-01-01

    The theory of anisotropic aquifer hydraulic properties and a computer program, written in Fortran 77, developed to compute the components of the anisotropic transmissivity tensor of two-dimensional groundwater flow are described. To determine the tensor components using one pumping well and three observation wells, the type-curve and straight-line approximation methods are developed. These methods are based on the equation of drawdown developed for two-dimensional nonsteady flow in an infinite anisotropic aquifer. To determine tensor components using more than three observation wells, a weighted least squares optimization procedure is described for use with the type-curve and straight-line approximation methods. The computer program described in this report allows the type-curve, straight-line approximation, and weighted least squares optimization methods to be used in conjunction with data from observation and pumping wells. Three example applications using the computer program and field data gathered during geohydrologic investigations at a site near Dawsonville, Georgia, are provided to illustrate the use of the computer program. The example applications demonstrate the use of the type-curve method using three observation wells, the weighted least squares optimization method using eight observation wells and equal weighting, and the weighted least squares optimization method using eight observation wells and unequal weighting. Results obtained using the computer program indicate major transmissivity in the range of 347-296 sq ft/day, minor transmissivity in the range of 139-99 sq ft/day, aquifer anisotropy in the range of 3.54 to 2.14, principal direction of flow in the range of N. 45.9 degrees E. to N. 58.7 degrees E., and storage coefficient in the range of 0.0063 to 0.0037. The numerical results are in good agreement with field data gathered on the weathered crystalline rocks underlying the investigation site. Supplemental material provides definitions of variables, data requirements and corresponding formats, input data and output results for the example applications, and a listing of the Fortran 77 computer code. (Author's abstract)

  17. Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties.

    PubMed

    Sila, Andrew M; Shepherd, Keith D; Pokhariyal, Ganesh P

    2016-04-15

    We propose four methods for finding local subspaces in large spectral libraries: (a) cosine angle spectral matching; (b) hit quality index spectral matching; (c) self-organizing maps; and (d) archetypal analysis. We then evaluate prediction accuracies for global and subspace calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon and clay soil properties were developed for the whole library and for the subspaces. Root mean square error of prediction was used to evaluate the predictive performance of the subspace and global models and was computed using a one-third-holdout validation set. The effect of pretreating the spectra with different methods was tested for 1st and 2nd derivative Savitzky-Golay algorithms, multiplicative scatter correction, standard normal variate, and standard normal variate followed by detrending. In summary, the results show that the global models outperformed the subspace models. We therefore conclude that global models are more accurate than local models except in a few cases. For instance, sand and clay root mean square error values from the local models of the archetypal analysis method were 50% poorer than those of the global models, except for subspace models obtained using multiplicative-scatter-corrected spectra, which were 12% better. However, the subspace approach provides novel methods for discovering data patterns that may exist in large spectral libraries.
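
    Method (a) above, cosine-angle spectral matching, reduces to ranking library spectra by their cosine similarity to a target spectrum and keeping the closest ones as the local calibration subspace. The sketch below assumes synthetic stand-ins for the 1907-spectrum library; the subspace size k is an arbitrary choice.

```python
# Cosine-angle matching: select the local subspace nearest a target spectrum.
import numpy as np

def cosine_subspace(library, target, k=200):
    """Return indices of the k library spectra most similar to target."""
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    tgt = target / np.linalg.norm(target)
    scores = lib @ tgt                    # cosine similarity per spectrum
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(8)
library = rng.normal(size=(1907, 1700))   # stand-in for the soil library
target = library[42] + rng.normal(scale=0.1, size=1700)
print(cosine_subspace(library, target)[:5])
```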

  18. Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Youngsoo; Carlberg, Kevin Thomas

    Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.

  19. A study of various methods for calculating locations of lightning events

    NASA Technical Reports Server (NTRS)

    Cannon, John R.

    1995-01-01

    This article reports on the results of numerical experiments on finding the location of lightning events using different numerical methods. The methods include linear least squares, nonlinear least squares, statistical estimations, cluster analysis and angular filters and combinations of such techniques. The experiments involved investigations of methods for excluding fake solutions which are solutions that appear to be reasonable but are in fact several kilometers distant from the actual location. Some of the conclusions derived from the study are that bad data produces fakes, that no fool-proof method of excluding fakes was found, that a short base-line interferometer under development at Kennedy Space Center to measure the direction cosines of an event shows promise as a filter for excluding fakes. The experiments generated a number of open questions, some of which are discussed at the end of the report.

  20. A composite step conjugate gradients squared algorithm for solving nonsymmetric linear systems

    NASA Astrophysics Data System (ADS)

    Chan, Tony; Szeto, Tedd

    1994-03-01

    We propose a new and more stable variant of the CGS method [27] for solving nonsymmetric linear systems. The method is based on squaring the Composite Step BCG method, introduced recently by Bank and Chan [1,2], which itself is a stabilized variant of BCG in that it skips over steps for which the BCG iterate is not defined and causes one kind of breakdown in BCG. By doing this, we obtain a method (Composite Step CGS or CSCGS) which not only handles the breakdowns described above, but does so with the advantages of CGS, namely, no multiplications by the transpose matrix and a faster convergence rate than BCG. Our strategy for deciding whether to skip a step does not involve any machine dependent parameters and is designed to skip near breakdowns as well as produce smoother iterates. Numerical experiments show that the new method does produce improved performance over CGS on practical problems.
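
    For comparison with the baseline that CSCGS stabilizes, plain conjugate gradients squared is available in SciPy. The sketch below applies it to a small random nonsymmetric system; the composite-step safeguard proposed in the paper is not part of this routine.

```python
# Baseline conjugate gradients squared (CGS) on a nonsymmetric system.
import numpy as np
from scipy.sparse.linalg import cgs

rng = np.random.default_rng(9)
A = np.eye(50) * 4 + rng.normal(scale=0.3, size=(50, 50))   # nonsymmetric
b = rng.normal(size=50)

x, info = cgs(A, b)                       # info == 0 signals convergence
print("converged:", info == 0, "residual:", np.linalg.norm(A @ x - b))
```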

  1. One-dimensional stitching interferometry assisted by a triple-beam interferometer

    DOE PAGES

    Xue, Junpeng; Huang, Lei; Gao, Bo; ...

    2017-04-13

    In this work, we propose using a triple-beam interferometer in stitching interferometry to measure both the distance and the tilt for all sub-apertures before the stitching process. The relative piston between two neighboring sub-apertures is then calculated by using the data in the overlapping area. Comparisons are made between our method and the classical least-squares principle stitching method. Our method can improve the accuracy and repeatability of the classical stitching method when a large number of sub-aperture topographies are taken into account. Our simulations and experiments on flat and spherical mirrors indicate that our proposed method can decrease the influence of the interferometer error on the stitched result. The comparison of the stitching system with Fizeau interferometry data agrees to about 2 nm root mean square, and the repeatability is within ±2.5 nm peak to valley.

  2. Kinase Identification with Supervised Laplacian Regularized Least Squares

    PubMed Central

    Zhang, He; Wang, Minghui

    2015-01-01

    Phosphorylation is catalyzed by protein kinases and is irreplaceable in regulating biological processes. Identification of phosphorylation sites with their corresponding kinases contributes to the understanding of molecular mechanisms. Mass spectrometry analysis of phosphor-proteomes generates a large number of phosphorylated sites. However, experimental methods are costly and time-consuming, and most phosphorylation sites determined by experimental methods lack kinase information. Therefore, computational methods are urgently needed to address the kinase identification problem. To this end, we propose a new kernel-based machine learning method called Supervised Laplacian Regularized Least Squares (SLapRLS), which adopts a new method to construct kernels based on the similarity matrix and minimizes both structure risk and overall inconsistency between labels and similarities. The results predicted using both Phospho.ELM and an additional independent test dataset indicate that SLapRLS can more effectively identify kinases compared to other existing algorithms. PMID:26448296

  3. Kinase Identification with Supervised Laplacian Regularized Least Squares.

    PubMed

    Li, Ao; Xu, Xiaoyi; Zhang, He; Wang, Minghui

    2015-01-01

    Phosphorylation is catalyzed by protein kinases and is irreplaceable in regulating biological processes. Identification of phosphorylation sites with their corresponding kinases contributes to the understanding of molecular mechanisms. Mass spectrometry analysis of phosphor-proteomes generates a large number of phosphorylated sites. However, experimental methods are costly and time-consuming, and most phosphorylation sites determined by experimental methods lack kinase information. Therefore, computational methods are urgently needed to address the kinase identification problem. To this end, we propose a new kernel-based machine learning method called Supervised Laplacian Regularized Least Squares (SLapRLS), which adopts a new method to construct kernels based on the similarity matrix and minimizes both structure risk and overall inconsistency between labels and similarities. The results predicted using both Phospho.ELM and an additional independent test dataset indicate that SLapRLS can more effectively identify kinases compared to other existing algorithms.

  4. A least-squares finite element method for 3D incompressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Hou, Lin-Jun; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to three-dimensional steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive definite algebraic system. An additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method, which avoids formation of either element or global matrices (matrix-free) to achieve high efficiency. The flow in half of a 3D cubic cavity is calculated at Re = 100, 400, and 1,000 with 50 x 52 x 25 trilinear elements. The Taylor-Görtler-like vortices are observed at Re = 1,000.

  5. Simulation of speckle patterns with pre-defined correlation distributions.

    PubMed

    Song, Lipei; Zhou, Zhen; Wang, Xueyan; Zhao, Xing; Elson, Daniel S

    2016-03-01

    We put forward a method to easily generate a single or a sequence of fully developed speckle patterns with pre-defined correlation distribution by utilizing the principle of coherent imaging. The few-to-one mapping between the input correlation matrix and the correlation distribution between simulated speckle patterns is realized and there is a simple square relationship between the values of these two correlation coefficient sets. This method is demonstrated both theoretically and experimentally. The square relationship enables easy conversion from any desired correlation distribution. Since the input correlation distribution can be defined by a digital matrix or a gray-scale image acquired experimentally, this method provides a convenient way to simulate real speckle-related experiments and to evaluate data processing techniques.
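
    The coherent-imaging recipe admits a short sketch: a delta-correlated random source field is low-pass filtered through a pupil, and the intensity of the filtered field is a fully developed speckle pattern. Mixing two independent source fields with weight c gives field correlation c between the two patterns, and the intensity correlation then follows the square relationship noted above. The pupil radius and grid size below are arbitrary choices, not the paper's parameters.

```python
# Correlated speckle pair via coherent imaging; intensity correlation ~ c**2.
import numpy as np

def speckle(src, aperture):
    # Low-pass filter the source field through the pupil, take intensity.
    return np.abs(np.fft.ifft2(np.fft.fft2(src) * aperture)) ** 2

n = 256
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = np.fft.ifftshift((xx**2 + yy**2) < 30**2)     # circular pupil

rng = np.random.default_rng(10)
u1 = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))      # random phasor fields
u2 = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))
c = 0.8                                                  # target field corr.
I1 = speckle(u1, aperture)
I2 = speckle(c * u1 + np.sqrt(1 - c**2) * u2, aperture)
print(np.corrcoef(I1.ravel(), I2.ravel())[0, 1])         # close to c**2 = 0.64
```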

  6. Studies of superresolution range-Doppler imaging

    NASA Astrophysics Data System (ADS)

    Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing; Yin, Jun; She, Zhishun

    1993-02-01

    This paper presents three superresolution imaging methods: the linear prediction data extrapolation DFT (LPDEDFT), the dynamic optimization linear least squares (DOLLS), and the Hopfield neural network nonlinear least squares (HNNNLS). Live data from a metalized scale-model B-52 aircraft, mounted on a rotating platform in a microwave anechoic chamber, have been processed in this way, as have data from a flying Boeing 727 aircraft. The imaging results indicate that, compared to the conventional Fourier method, these superresolution approaches can provide either higher resolution for the same effective bandwidth of transmitted signals and total rotation angle in imaging, or equal-quality images from a smaller bandwidth and total rotation angle. Moreover, these methods are compared in respect of their resolution capability and computational complexity.

  7. Simulation of speckle patterns with pre-defined correlation distributions

    PubMed Central

    Song, Lipei; Zhou, Zhen; Wang, Xueyan; Zhao, Xing; Elson, Daniel S.

    2016-01-01

    We put forward a method to easily generate a single or a sequence of fully developed speckle patterns with pre-defined correlation distribution by utilizing the principle of coherent imaging. The few-to-one mapping between the input correlation matrix and the correlation distribution between simulated speckle patterns is realized and there is a simple square relationship between the values of these two correlation coefficient sets. This method is demonstrated both theoretically and experimentally. The square relationship enables easy conversion from any desired correlation distribution. Since the input correlation distribution can be defined by a digital matrix or a gray-scale image acquired experimentally, this method provides a convenient way to simulate real speckle-related experiments and to evaluate data processing techniques. PMID:27231589

  8. Microcanonical-ensemble computer simulation of the high-temperature expansion coefficients of the Helmholtz free energy of a square-well fluid

    NASA Astrophysics Data System (ADS)

    Sastre, Francisco; Moreno-Hilario, Elizabeth; Sotelo-Serna, Maria Guadalupe; Gil-Villegas, Alejandro

    2018-02-01

    The microcanonical-ensemble computer simulation method (MCE) is used to evaluate the perturbation terms Ai of the Helmholtz free energy of a square-well (SW) fluid. The MCE method offers a very efficient and accurate procedure for the determination of perturbation terms of discrete-potential systems such as the SW fluid and surpasses the standard NVT canonical-ensemble Monte Carlo method, allowing the calculation of the first six expansion terms. Results are presented for the case of a SW potential with attractive ranges 1.1 ≤ λ ≤ 1.8. Using a semi-empirical representation of the MCE values for Ai, we also discuss the accuracy in the determination of the phase diagram of this system.

  9. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned "least squares shadowing (LSS) problem". The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  10. Least-squares dual characterization for ROI assessment in emission tomography

    NASA Astrophysics Data System (ADS)

    Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.

    2013-06-01

    Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing upon the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performances of LSD characterization are at least as good as those of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD using appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff.

  11. A Review of the Proposed K (sub Isi) Offset-Secant Method for Size-Independent Linear-Elastic Toughness Evaluation

    NASA Technical Reports Server (NTRS)

    James, Mark; Wells, Doug; Allen, Phillip; Wallin, Kim

    2017-01-01

    The proposed size-independent linear-elastic fracture toughness, K_Isi, for potential inclusion in ASTM E399 targets a consistent 0.5 millimeters of crack extension for all specimen sizes through an offset secant that is a function of the specimen ligament length. The K_Isi method also includes an increase in allowable deformation and the removal of the P_max/P_Q criterion. A finite element study of the K_Isi test method confirms the viability of the increased deformation limit, but has also revealed a few areas of concern. Findings: 1. The deformation limit b_o >= 1.1(K_I/delta_ys)^2 maintains a K-dominant crack tip field with limited plastic contribution to the fracture energy; 2. The three-dimensional effects on compliance and on the shape of the force versus CMOD (crack-mouth opening displacement) trace are significant compared to a plane strain assumption; 3. The non-linearity in the force versus CMOD trace at deformations higher than the current limit of 2.5(K_I/delta_ys)^2 is sufficient to introduce error or even "false calls" regarding crack extension when using a constant offset secant line; this issue is more significant for specimens with width W >= 2 inches; 4. A non-linear plasticity correction factor in the offset secant may improve the viability of the method at deformations between 2.5(K_I/delta_ys)^2 and 1.1(K_I/delta_ys)^2.

  12. Kernel-based least squares policy iteration for reinforcement learning.

    PubMed

    Xu, Xin; Hu, Dewen; Lu, Xicheng

    2007-07-01

    In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge on dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to the previous works on approximate RL methods, KLSPI makes two progresses to eliminate the main difficulties of existing results. One is the better convergence and (near) optimality guarantee by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing up control of a double-link underactuated pendulum called acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information of uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating an initial controller to ensure online performance.

  13. Geospatial distribution modeling and determining suitability of groundwater quality for irrigation purpose using geospatial methods and water quality index (WQI) in Northern Ethiopia

    NASA Astrophysics Data System (ADS)

    Gidey, Amanuel

    2018-06-01

    Determining the suitability and vulnerability of groundwater quality for irrigation use is a critical first step toward careful management of groundwater resources and toward diminishing impacts on irrigation. This study was conducted to determine the overall suitability of groundwater quality for irrigation use and to generate spatial distribution maps for the Elala catchment, Northern Ethiopia. Thirty-nine groundwater samples were collected to analyze and map the water quality variables. Atomic absorption spectrophotometry, ultraviolet spectrophotometry, titration and calculation methods were used for laboratory groundwater quality analysis. ArcGIS geospatial analysis tools, semivariogram model types and interpolation methods were used to generate the geospatial distribution maps. Twelve and eight water quality variables were used to produce the weighted overlay and irrigation water quality index models, respectively. Root-mean-square error, mean square error, absolute square error, mean error, root-mean-square standardized error, and measured versus predicted values were used for cross-validation. The overall weighted overlay model result showed that 146 km2 of the catchment is highly suitable, 135 km2 moderately suitable and 60 km2 unsuitable for irrigation use. The irrigation water quality index result shows 10.26% of samples with no restriction, 23.08% with low restriction, 20.51% with moderate restriction, 15.38% with high restriction and 30.76% with severe restriction for irrigation use. GIS and the irrigation water quality index are effective methods for irrigation water resources management, helping to achieve full-yield irrigation production, to improve and sustain food security over the long term, and to avoid increasing environmental problems for future generations.

  14. A novel approach to the experimental study on methane/steam reforming kinetics using the Orthogonal Least Squares method

    NASA Astrophysics Data System (ADS)

    Sciazko, Anna; Komatsu, Yosuke; Brus, Grzegorz; Kimijima, Shinji; Szmyd, Janusz S.

    2014-09-01

    For a mathematical model built on physical measurements, it becomes possible to determine the influence of those measurements on the final solution and its accuracy. However, in classical approaches, the influence of different model simplifications on the reliability of the obtained results is usually not comprehensively discussed. This paper presents a novel approach to the study of methane/steam reforming kinetics based on an advanced methodology called the Orthogonal Least Squares method. Previously published kinetics of the reforming process diverge considerably among themselves. To obtain the most probable values of the kinetic parameters and to enable direct and objective model verification, an appropriate calculation procedure needs to be proposed. The applied Generalized Least Squares (GLS) method includes all experimental results in the mathematical model, which becomes internally contradictory (overdetermined), as the number of equations is greater than the number of unknown variables. The GLS method is adopted to select the most probable values of the results and simultaneously determine the uncertainty coupled with all variables in the system. In this paper, the reaction rate was first pre-determined by a preliminary calculation based on experimental results obtained over a nickel/yttria-stabilized zirconia catalyst, and then evaluated with the GLS procedure.
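
    A minimal weighted (generalized) least squares sketch in the same spirit, fitting an Arrhenius-type rate law with per-point measurement variances; the data, units and the ln-k error propagation below are illustrative assumptions, not the paper's formulation.

        import numpy as np

        # Weighted least squares fit of ln k = ln A - E/(R T) on synthetic data.
        R = 8.314
        T = np.array([873.0, 923.0, 973.0, 1023.0])   # temperatures, K
        k_obs = np.array([0.12, 0.35, 0.90, 2.10])    # observed rates (arbitrary units)
        sigma = np.array([0.02, 0.04, 0.10, 0.25])    # measurement std of k_obs

        y = np.log(k_obs)
        X = np.column_stack([np.ones_like(T), -1.0 / (R * T)])
        W = np.diag((k_obs / sigma) ** 2)             # var(ln k) ~ (sigma/k)^2

        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        lnA, E = beta
        cov = np.linalg.inv(X.T @ W @ X)              # parameter uncertainty estimate
        print(f"A = {np.exp(lnA):.3g}, E = {E:.3g} J/mol")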

  15. Development of a Nonlinear Soft-Sensor Using a GMDH Network for a Refinery Crude Distillation Tower

    NASA Astrophysics Data System (ADS)

    Fujii, Kenzo; Yamamoto, Toru

    In atmospheric distillation processes, stabilization of the process is required in order to optimize the crude-oil composition corresponding to product market conditions. However, process control systems sometimes fall into unstable states when unexpected disturbances are introduced, and these unusual phenomena have had an undesirable effect on certain products. Furthermore, a useful chemical engineering model has not yet been established for these phenomena, which remains a serious problem in the atmospheric distillation process. This paper describes a new modeling scheme to predict unusual phenomena in the atmospheric distillation process using a GMDH (Group Method of Data Handling) network, which is one type of network model. With the GMDH network, the model structure can be determined systematically. However, the least squares method has commonly been utilized to determine the weight coefficients (model parameters), and estimation accuracy cannot be fully ensured, because it is the sum of squared errors between the measured values and the estimates that is evaluated. Therefore, instead of evaluating the sum of squared errors, the sum of the absolute values of the errors is introduced, and the Levenberg-Marquardt method is employed to determine the model parameters. The effectiveness of the proposed method is evaluated on foaming prediction during crude-oil switching operations in the atmospheric distillation process.
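
    For flavor, the sketch below fits a small model under an absolute-error-like loss on synthetic data with outliers. SciPy's Levenberg-Marquardt mode does not support robust losses, so the smooth 'soft_l1' loss with the trust-region solver stands in for the paper's L-M-on-absolute-errors scheme; the model and data are made up.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 1.0, 50)
        y = 2.0 + 3.0 * x - 1.5 * x**2 + rng.normal(scale=0.05, size=x.size)
        y[::10] += 1.0  # a few outliers, where an absolute-error loss helps

        def residuals(p):
            a, b, c = p
            return a + b * x + c * x**2 - y

        # 'soft_l1' approximates the sum of absolute errors while staying smooth.
        fit = least_squares(residuals, x0=np.zeros(3), loss="soft_l1", f_scale=0.1)
        print(fit.x)  # robust estimates of (a, b, c)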

  16. A square wave is the most efficient and reliable waveform for resonant actuation of micro switches

    NASA Astrophysics Data System (ADS)

    Ben Sassi, S.; Khater, M. E.; Najar, F.; Abdel-Rahman, E. M.

    2018-05-01

    This paper investigates efficient actuation methods for shunt MEMS switches and other parallel-plate actuators. We start by formulating a multi-physics model of the micro switch, coupling the nonlinear Euler-Bernoulli beam theory with the nonlinear Reynolds equation to describe the structural and fluidic domains, respectively. The model takes into account fringing field effects as well as mid-plane stretching and squeeze film damping nonlinearities. Static analysis is undertaken using the differential quadrature method (DQM) to obtain the pull-in voltage, which is verified by means of a finite element model and validated experimentally. We develop a reduced order model employing the Galerkin method for the structural domain and DQM for the fluidic domain. The proposed waveforms are intended to be more suitable for integrated circuit standards. The dynamic response of the micro switch to harmonic, square and triangular waveforms is evaluated and compared experimentally and analytically. Low voltage actuation is obtained using dynamic pull-in with the proposed waveforms. In addition, a global stability analysis carried out for the three signals shows the advantages of employing the square signal as the actuation method in enhancing the performance of the micro switch in terms of actuation voltage, switching time, and sensitivity to initial conditions.

  17. Online Detection of Broken Rotor Bar Fault in Induction Motors by Combining Estimation of Signal Parameters via Min-norm Algorithm and Least Square Method

    NASA Astrophysics Data System (ADS)

    Wang, Pan-Pan; Yu, Qiang; Hu, Yong-Jun; Miao, Chang-Xin

    2017-11-01

    Current research in broken rotor bar (BRB) fault detection in induction motors is primarily focused on high-frequency-resolution analysis of the stator current. Compared with a discrete Fourier transformation, the parametric spectrum estimation technique has higher frequency accuracy and resolution. However, the existing detection methods based on parametric spectrum estimation cannot realize online detection, owing to their large computational cost. To improve the efficiency of BRB fault detection, a new detection method based on the min-norm algorithm and least squares estimation is proposed in this paper. First, the stator current is filtered using a band-pass filter and divided into short overlapped data windows. The min-norm algorithm is then applied to determine the frequencies of the fundamental and fault characteristic components within each overlapped data window. Next, based on the frequency values obtained, a model of the fault current signal is constructed. Subsequently, a linear least squares problem solved through singular value decomposition is designed to estimate the amplitudes and phases of the related components. Finally, the proposed method is applied to a simulated current and an actual motor; the results indicate that the method retains the accuracy of the parametric spectrum estimation technique while being efficient enough for online detection.
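
    The amplitude/phase step is an ordinary linear least-squares problem once the frequencies are fixed. A sketch follows, with the frequencies assumed known (e.g. from a min-norm estimate) and the signal values synthetic.

        import numpy as np

        fs = 1000.0
        t = np.arange(0, 1.0, 1.0 / fs)
        freqs = [50.0, 48.2, 51.8]   # fundamental + sideband frequencies (assumed known)

        # Synthetic stator current: fundamental plus one weak fault sideband plus noise.
        x = 10 * np.cos(2 * np.pi * 50 * t) + 0.3 * np.cos(2 * np.pi * 48.2 * t + 0.5)
        x += np.random.default_rng(2).normal(scale=0.1, size=t.size)

        # Model is linear in cos/sin coefficients; lstsq solves it via SVD.
        M = np.column_stack([f(2 * np.pi * fr * t) for fr in freqs for f in (np.cos, np.sin)])
        coef, *_ = np.linalg.lstsq(M, x, rcond=None)

        for i, fr in enumerate(freqs):
            a, b = coef[2 * i], coef[2 * i + 1]
            amp, phase = np.hypot(a, b), np.arctan2(-b, a)
            print(f"{fr:5.1f} Hz: amplitude {amp:.3f}, phase {phase:.3f} rad")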

  18. Squared eigenfunctions for the Sasa-Satsuma equation

    NASA Astrophysics Data System (ADS)

    Yang, Jianke; Kaup, D. J.

    2009-02-01

    Squared eigenfunctions are quadratic combinations of Jost functions and adjoint Jost functions which satisfy the linearized equation of an integrable equation. They are needed for various studies related to integrable equations, such as the development of its soliton perturbation theory. In this article, squared eigenfunctions are derived for the Sasa-Satsuma equation, whose spectral operator is a 3×3 system, while its linearized operator is a 2×2 system. It is shown that these squared eigenfunctions are sums of two terms, where each term is a product of a Jost function and an adjoint Jost function. The derivation consists of two steps: the first is to calculate the variations of the potentials via variations of the scattering data by the Riemann-Hilbert method; the second is to calculate the variations of the scattering data via the variations of the potentials through elementary calculations. While this procedure has been used before on other integrable equations, it is shown here, for the first time, that for a general integrable equation the functions appearing in these variation relations are precisely the squared eigenfunctions and adjoint squared eigenfunctions satisfying, respectively, the linearized equation and the adjoint linearized equation of the integrable system. This proof clarifies the procedure and provides a unified explanation for previous results on squared eigenfunctions for individual integrable equations. The procedure uses primarily the spectral operator of the Lax pair; thus two equations in the same integrable hierarchy will share the same squared eigenfunctions (except for a time-dependent factor). In the Appendix, the squared eigenfunctions are presented for the Manakov equations, whose spectral operator is closely related to that of the Sasa-Satsuma equation.

  19. Comparing the tensile strength of square and reversing half-hitch alternating post knots

    PubMed Central

    Wu, Vincent; Sykes, Edward A.; Mercer, Dale; Hopman, Wilma M.; Tang, Ephraim

    2017-01-01

    Background Square knots are the gold standard in hand-tie wound closure, but are difficult to reproduce in deep cavities, inadvertently resulting in slipknots. The reversing half-hitch alternating post (RHAP) knot has been suggested as an alternative owing to its nonslip nature and reproducibility in limited spaces. We explored whether the RHAP knot is noninferior to the square knot by assessing tensile strength. Methods We conducted 10 trials for each baseline and knot configuration, using 3–0 silk and 3–0 polyglactin 910 sutures. We compared tensile strength between knot configurations at the point of knot failure between slippage and breakage. Results Maximal failure strength (mean ± SD) in square knots was reached with 4-throw in both silk (30 ± 1.5 N) and polyglactin 910 (39 ± 12 N). For RHAP knots, maximal failure strength was reached at 5-throw for both silk (31 ± 1.5 N) and polyglactin 910 (41 ± 13 N). In both sutures, there were no strength differences between 3-throw square and 4-throw RHAP, between 4-throw square and 5-throw RHAP, or between 5-throw square and 6-throw RHAP knots. Polyglactin 910 sutures, in all knot configurations, were more prone to slippage than silk sutures (p < 0.001). Conclusion The difference in mean tensile strength could be attributed to the proportion of knot slippage versus breakage, which is material-dependent. Future studies can re-evaluate findings in monofilament sutures and objectively assess the reproducibility of square and RHAP knots in deep cavities. Our results indicate that RHAP knots composed of 1 extra throw provide equivalent strength to square knots and may be an alternative when performing hand-ties in limited cavities with either silk or polyglactin 910 sutures. PMID:28327276

  20. New infinite families of exact sums of squares formulas, Jacobi elliptic functions, and Ramanujan's tau function.

    PubMed

    Milne, S C

    1996-12-24

    In this paper, we give two infinite families of explicit exact formulas that generalize Jacobi's (1829) 4 and 8 squares identities to 4n^2 or 4n(n+1) squares, respectively, without using cusp forms. Our 24 squares identity leads to a different formula for Ramanujan's tau function tau(n), when n is odd. These results arise in the setting of Jacobi elliptic functions, Jacobi continued fractions, Hankel or Turánian determinants, Fourier series, Lambert series, inclusion/exclusion, the Laplace expansion formula for determinants, and Schur functions. We have also obtained many additional infinite families of identities in this same setting that are analogous to the eta-function identities in appendix I of Macdonald's work [Macdonald, I. G. (1972) Invent. Math. 15, 91-143]. A special case of our methods yields a proof of the two conjectured [Kac, V. G. and Wakimoto, M. (1994) in Progress in Mathematics, eds. Brylinski, J.-L., Brylinski, R., Guillemin, V. & Kac, V. (Birkhäuser Boston, Boston, MA), Vol. 123, pp. 415-456] identities involving representing a positive integer by sums of 4n^2 or 4n(n+1) triangular numbers, respectively. Our 16 and 24 squares identities were originally obtained via multiple basic hypergeometric series, Gustafson's C_l nonterminating 6phi5 summation theorem, and Andrews' basic hypergeometric series proof of Jacobi's 4 and 8 squares identities. We have (elsewhere) applied symmetry and Schur function techniques to this original approach to prove the existence of similar infinite families of sums of squares identities for n^2 or n(n+1) squares, respectively. Our sums of more than 8 squares identities are not the same as the formulas of Mathews (1895), Glaisher (1907), Ramanujan (1916), Mordell (1917, 1919), Hardy (1918, 1920), Kac and Wakimoto, and many others.

  1. Synchronous acquisition of multi-channel signals by single-channel ADC based on square wave modulation

    NASA Astrophysics Data System (ADS)

    Yi, Xiaoqing; Hao, Liling; Jiang, Fangfang; Xu, Lisheng; Song, Shaoxiu; Li, Gang; Lin, Ling

    2017-08-01

    Synchronous acquisition of multi-channel biopotential signals, such as the electrocardiograph (ECG) and electroencephalograph, is of vital significance in health care and clinical diagnosis. In this paper, we propose a new method that uses a single-channel ADC to synchronously acquire multi-channel biopotential signals modulated by square waves. A specific modulation and demodulation scheme is investigated that requires no complex signal processing. For each channel, the sampling rate does not decline as the number of signal channels increases. More specifically, the signal-to-noise ratio of each channel is n times that of the time-division method, an improvement of 3.01 × log2(n) dB, where n represents the number of signal channels. A numerical simulation shows the feasibility and validity of the method. In addition, a newly developed 8-lead ECG system based on the new method is introduced. These experiments illustrate that the method is practicable and thus has potential for low-cost medical monitors.
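
    The principle can be illustrated with orthogonal square-wave (Walsh) codes: several channels are modulated, summed onto one wire, digitized by a single ADC, and recovered by correlating with each code. This is a conceptual sketch of the idea only, not the paper's circuit implementation.

        import numpy as np
        from scipy.linalg import hadamard

        n_ch, L = 4, 4                      # channels and code length
        codes = hadamard(L)[:n_ch]          # +/-1 square-wave-like orthogonal codes

        rng = np.random.default_rng(3)
        signals = rng.normal(size=(n_ch, 256))        # slowly varying "biopotentials"

        # Modulate: each sample is spread over L chips; channels sum onto one wire.
        chips = np.repeat(signals, L, axis=1) * np.tile(codes, (1, 256))
        adc = chips.sum(axis=0)                        # what the single ADC digitizes

        # Demodulate: multiply by each code and average over every L-chip block.
        blocks = adc.reshape(256, L)
        recovered = np.array([(blocks * c).mean(axis=1) for c in codes])
        print(np.allclose(recovered, signals))         # True: orthogonality separates channels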

  2. Reconstruction method for fluorescent X-ray computed tomography by least-squares method using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Yuasa, T.; Akiba, M.; Takeda, T.; Kazama, M.; Hoshino, A.; Watanabe, Y.; Hyodo, K.; Dilmanian, F. A.; Akatsuka, T.; Itai, Y.

    1997-02-01

    We describe a new attenuation correction method for fluorescent X-ray computed tomography (FXCT) applied to image nonradioactive contrast materials in vivo. The principle of FXCT imaging is that of first-generation computed tomography. Using monochromatized synchrotron radiation from the BLNE-5A bending-magnet beam line of the Tristan Accumulation Ring in KEK, Japan, we studied phantoms with the FXCT method, and we succeeded in delineating a 4-mm-diameter channel filled with a 500 μg I/ml iodine solution in a 20-mm-diameter acrylic cylindrical phantom. However, to detect smaller iodine concentrations, attenuation correction is needed. We present a correction method based on the equation representing the measurement process. The discretized equation system is solved by the least-squares method using the singular value decomposition. The attenuation correction method is applied to projections from Monte Carlo simulation and from experiment to confirm its effectiveness.

  3. Environmental justice assessment for transportation : risk analysis

    DOT National Transportation Integrated Search

    1999-04-01

    This paper presents methods of comparing populations and their racial/ethnic compositions using tabulations, histograms, and Chi Squared tests for statistical significance of differences found. Two examples of these methods are presented: comparison ...

  4. Synthesis and optimization of four bar mechanism with six design parameters

    NASA Astrophysics Data System (ADS)

    Jaiswal, Ankur; Jawale, H. P.

    2018-04-01

    Function generation is the synthesis of a mechanism to perform a specific task; it becomes complex when more than five precision points of the coupler are prescribed, and thus entails large structural error. The methodology for arriving at a more precise solution is to use an optimization technique. The work presented herein considers optimization of the structural error in a closed kinematic chain with a single degree of freedom, for generating functions such as log(x), e^x, tan(x) and sin(x) with five precision points. The Freudenstein-Chebyshev equation is used to develop five-precision-point synthesis of the mechanism. An extended formulation is proposed, and results are obtained to verify existing results in the literature. Optimization of the structural error is carried out using a least squares approach. A comparative structural error analysis is presented for the error optimized through the least squares method and the extended Freudenstein-Chebyshev method.
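
    As a sketch of the least-squares step, the Freudenstein equation K1*cos(phi) - K2*cos(psi) + K3 = cos(phi - psi) is linear in K1..K3, so an overdetermined set of input/output angle pairs can be fitted directly. Sign conventions vary between texts, and the angle data below are illustrative only.

        import numpy as np

        phi = np.radians([30, 45, 60, 75, 90, 105, 120])   # input link angles
        psi = np.radians([40, 52, 63, 73, 83, 92, 100])    # desired output angles

        # Overdetermined linear system in the Freudenstein parameters K1, K2, K3;
        # lstsq minimizes the structural error in the least-squares sense.
        A = np.column_stack([np.cos(phi), -np.cos(psi), np.ones_like(phi)])
        b = np.cos(phi - psi)
        K, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("K1, K2, K3 =", K)    # link-length ratios follow from K1..K3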

  5. Least squares reverse time migration of controlled order multiples

    NASA Astrophysics Data System (ADS)

    Liu, Y.

    2016-12-01

    Imaging using the reverse time migration of multiples generates inherent crosstalk artifacts due to the interference among different-order multiples. Traditionally, least-squares fitting has been used to address this issue by seeking the best objective function to measure the amplitude differences between the predicted and observed data. We have developed an alternative objective function by decomposing multiples into different orders, minimizing the difference between Born-modeling-predicted multiples and specific-order multiples from the observed data in order to attenuate the crosstalk. This method is denoted the least-squares reverse time migration of controlled order multiples (LSRTM-CM). Our numerical examples demonstrated that LSRTM-CM can significantly improve image quality compared with reverse time migration of multiples and least-squares reverse time migration of multiples. Acknowledgments: This research was funded by the National Nature Science Foundation of China (Grant Nos. 41430321 and 41374138).

  6. Flux synthesis of regular Bi4TaO8Cl square nanoplates exhibiting dominant exposure surfaces of {001} crystal facets for photocatalytic reduction of CO2 to methane.

    PubMed

    Li, Liang; Han, Qiutong; Tang, Lanqin; Zhang, Yuan; Li, Ping; Zhou, Yong; Zou, Zhigang

    2018-01-25

    Herein, orthorhombic regular Bi4TaO8Cl square nanoplates with an edge length of about 500 nm and a thickness of about 100 nm were successfully synthesized using a facile molten salt route. The as-prepared square nanoplates were shown to expose {001} crystal facets as their two dominant surfaces. Density functional theory calculations and a noble metal photo-deposition experiment demonstrate electron and hole separation on different crystal facets and reveal that the {001} crystal facets favor the reduction reaction. Since the square nanoplate structure dominantly exposes the {001} facets, the molten-salt-route samples possess an appreciably higher photocatalytic activity than those prepared by the solid state reaction (SSR) method. This study may provide inspiration for fabricating efficient photocatalysts.

  7. [Relationship between crown form of upper central incisors and papilla filling in Chinese Han-nationality youth].

    PubMed

    Yang, X; Le, D; Zhang, Y L; Liang, L Z; Yang, G; Hu, W J

    2016-10-18

    To explore a crown form classification method for the upper central incisor that is more objective and scientific than the traditional classification method, based on a standardized photography technique, and to analyze the relationship between the crown form of upper central incisors and papilla filling in periodontally healthy Chinese Han-nationality youth. In the study, 180 periodontally healthy Chinese youth (75 males and 105 females) aged 20-30 (24.3±4.5) years were included. With the standardized upper central incisor photography technique, pictures of 360 upper central incisors were obtained. Each tooth was classified as triangular, ovoid or square by 13 experienced specialists in prosthodontics independently, and the final classification was decided by the majority of evaluators in order to ensure objectivity. The standardized digital photos were also used to evaluate the gingival papilla filling situation, recorded as present or absent according to naked-eye observation. The papilla filling rates of the different crown forms were analyzed. Statistical analyses were performed with SPSS 19.0. The proportions of triangular, ovoid and square forms of the upper central incisor in Chinese Han-nationality youth were 31.4% (113/360), 37.2% (134/360) and 31.4% (113/360), respectively, and no statistical difference was found between males and females. The average κ value between each pair of evaluators was 0.381; the average κ value rose to 0.563 when compared with the final classification result. In the study, 24 upper central incisors without contact were excluded, and the papilla filling rates of triangular, ovoid and square crowns were 56.4% (62/110), 69.6% (87/125) and 76.2% (77/101), respectively. The papilla filling rate of the square form was higher (P=0.007). The proportions of the clinical crown forms of the upper central incisor in Chinese Han-nationality youth were obtained. Compared with the triangular form, the square form was found to favor a gingival papilla that fills the interproximal embrasure space. The consistency of the present classification method for the upper central incisor is not satisfactory, which indicates that a new, more scientific and objective classification method is needed.

  8. [Spectral quantitative analysis by nonlinear partial least squares based on neural network internal model for flue gas of thermal power plant].

    PubMed

    Cao, Hui; Li, Yao-Jiang; Zhou, Yan; Wang, Yan-Xia

    2014-11-01

    To deal with the nonlinear characteristics of spectral data from thermal power plant flue gas, a nonlinear partial least squares (PLS) method with an internal model based on a neural network is adopted in this paper. The latent variables of the independent and dependent variables are first extracted by PLS regression, and they are then used as the inputs and outputs of the neural network, respectively, to build the nonlinear internal model through training. For the flue gas spectra of the thermal power plant, PLS, nonlinear PLS with a back propagation neural network internal model (BP-NPLS), nonlinear PLS with a radial basis function neural network internal model (RBF-NPLS) and nonlinear PLS with an adaptive fuzzy inference system internal model (ANFIS-NPLS) are compared. The root mean square error of prediction (RMSEP) for sulfur dioxide is reduced by 16.96%, 16.60% and 19.55% with BP-NPLS, RBF-NPLS and ANFIS-NPLS relative to PLS, respectively. The RMSEP for nitric oxide is reduced by 8.60%, 8.47% and 10.09%, and that for nitrogen dioxide by 2.11%, 3.91% and 3.97%, respectively. Experimental results show that nonlinear PLS is more suitable than PLS for the quantitative analysis of flue gas. Moreover, by using neural network functions capable of closely approximating nonlinear characteristics, the nonlinear partial least squares method with an internal model has good predictive capability and robustness, and to a certain extent overcomes the limitations of nonlinear PLS methods built on other internal models, such as polynomial and spline functions. ANFIS-NPLS shows the best performance, its adaptive fuzzy inference system internal model being able to learn more and reduce the residuals effectively. Hence, ANFIS-NPLS is an accurate and useful method for quantitative thermal power plant flue gas analysis.
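
    A simplified sketch of the idea follows, with PLS extracting latent scores and a small neural network serving as the nonlinear internal model. For brevity the network maps the input scores directly to the response rather than to the output scores, and the data are synthetic rather than flue-gas spectra.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(4)
        X = rng.normal(size=(120, 30))                 # stand-in "spectra"
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=120)

        pls = PLSRegression(n_components=3).fit(X, y.reshape(-1, 1))
        T = pls.transform(X)                           # input latent variables

        inner = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
        inner.fit(T, y)                                # nonlinear internal model

        y_hat = inner.predict(pls.transform(X))
        print(f"train RMSE: {np.sqrt(np.mean((y - y_hat) ** 2)):.4f}")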

  9. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually treated as a nuisance fluctuation in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct for the effect of temperature variations. However, temperature can also be treated as a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has investigated the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method is proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method is proposed based on MTCS and the relationship between TSVC and normalized squared temperature. We compare the prediction performance of PLS models based on random sampling and on the proposed methods. The results of experimental studies show that prediction performance is improved by the proposed methods. Therefore, MTCS and DTCS are alternative methods for improving prediction accuracy in near-infrared spectral measurement.

  10. A Unified Approach to Teaching Quadratic and Cubic Equations.

    ERIC Educational Resources Information Center

    Ward, A. J. B.

    2003-01-01

    Presents a simple method for teaching the algebraic solution of cubic equations via completion of the cube. Shows that this method is readily accepted by students already familiar with completion of the square as a method for quadratic equations. (Author/KHR)
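
    For reference, the substitution underlying "completion of the cube" directly mirrors completing the square; in LaTeX (standard algebra, not quoted from the article):

        % Substituting x = y - b/(3a) into ax^3 + bx^2 + cx + d = 0 removes the
        % quadratic term, leaving a "depressed" cubic that can be solved directly.
        \[
          ax^3 + bx^2 + cx + d \;=\; a\left(y^3 + py + q\right), \qquad x = y - \frac{b}{3a},
        \]
        \[
          p = \frac{3ac - b^2}{3a^2}, \qquad q = \frac{2b^3 - 9abc + 27a^2 d}{27a^3}.
        \]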

  11. Method for releasing hydrogen from ammonia borane

    DOEpatents

    Varma, Arvind; Diwan, Moiz; Shafirovich, Evgeny; Hwang, Hyun-Tae; Al-Kukhun, Ahmad

    2013-02-19

    A method of releasing hydrogen from ammonia borane is disclosed. The method comprises heating an aqueous ammonia borane solution to between about 80-135 °C at between about 14.7 and 200 pounds per square inch absolute (psia) to release hydrogen by hydrothermolysis.

  12. Application of the Galerkin/least-squares formulation to the analysis of hypersonic flows. II - Flow past a double ellipse

    NASA Technical Reports Server (NTRS)

    Chalot, F.; Hughes, T. J. R.; Johan, Z.; Shakib, F.

    1991-01-01

    A finite element method for the compressible Navier-Stokes equations is introduced. The discretization is based on entropy variables. The methodology is developed within the framework of a Galerkin/least-squares formulation to which a discontinuity-capturing operator is added. Results for four test cases selected among those of the Workshop on Hypersonic Flows for Reentry Problems are presented.

  13. Detection of Glutamic Acid in Oilseed Rape Leaves Using Near Infrared Spectroscopy and the Least Squares-Support Vector Machine

    PubMed Central

    Bao, Yidan; Kong, Wenwen; Liu, Fei; Qiu, Zhengjun; He, Yong

    2012-01-01

    Amino acids are important indices of the growth status of oilseed rape under herbicide stress. Near infrared (NIR) spectroscopy combined with chemometrics was applied for fast determination of glutamic acid in oilseed rape leaves. The optimal spectral preprocessing method was obtained after comparing Savitzky-Golay smoothing, standard normal variate, multiplicative scatter correction, first and second derivatives, detrending and direct orthogonal signal correction. Linear and nonlinear calibration methods were developed, including partial least squares (PLS) and least squares-support vector machine (LS-SVM). The most effective wavelengths (EWs) were determined by the successive projections algorithm (SPA), and these wavelengths were used as the inputs of the PLS and LS-SVM models. The best prediction results were achieved by the SPA-LS-SVM (raw) model, with a correlation coefficient r = 0.9943 and a root mean square error of prediction (RMSEP) = 0.0569 for the prediction set. These results indicate that NIR spectroscopy combined with SPA-LS-SVM is feasible for the fast and effective detection of glutamic acid in oilseed rape leaves. The selected EWs could be used to develop spectral sensors, and the basic amino acid data are helpful for studying the mechanism of herbicide action. PMID:23203052

  14. Solving matrix effects exploiting the second-order advantage in the resolution and determination of eight tetracycline antibiotics in effluent wastewater by modelling liquid chromatography data with multivariate curve resolution-alternating least squares and unfolded-partial least squares followed by residual bilinearization algorithms II. Prediction and figures of merit.

    PubMed

    García, M D Gil; Culzoni, M J; De Zan, M M; Valverde, R Santiago; Galera, M Martínez; Goicoechea, H C

    2008-02-01

    A new powerful algorithm (unfolded partial least squares followed by residual bilinearization, U-PLS/RBL) was applied for the first time to second-order liquid chromatography with diode array detection (LC-DAD) data and compared with a well-established method (multivariate curve resolution-alternating least squares, MCR-ALS) for the simultaneous determination of eight tetracyclines (tetracycline, oxytetracycline, meclocycline, minocycline, metacycline, chlortetracycline, demeclocycline and doxycycline) in wastewaters. Tetracyclines were pre-concentrated using Oasis Max C18 cartridges and then separated on a Thermo Aquasil C18 (150 mm x 4.6 mm, 5 μm) column. The whole method was validated using Milli-Q water samples, and both univariate and multivariate analytical figures of merit were obtained. Additionally, two data pre-treatments were applied (baseline correction and piecewise direct standardization), which allowed the effect of breakthrough to be corrected and the total interferences retained after pre-concentration of wastewaters to be reduced. The results showed that the eight tetracycline antibiotics can be successfully determined in wastewaters, the drawbacks due to matrix interferences being adequately handled and overcome by using U-PLS/RBL.

  15. Comb model for the anomalous diffusion with dual-phase-lag constitutive relation

    NASA Astrophysics Data System (ADS)

    Liu, Lin; Zheng, Liancun; Fan, Yu; Chen, Yanping; Liu, Fawang

    2018-10-01

    As a development of Fick's model, the dual-phase-lag constitutive relationship with macroscopic and microscopic relaxation characteristics is introduced to describe anomalous diffusion in a comb model. The Dirac delta function in the formulated governing equation represents the special spatial structure of the comb model, in which the horizontal current exists only on the x axis. Solutions are obtained analytically with the Laplace transform and Fourier transform. The dependence of the concentration field and the mean square displacement on different parameters is presented and discussed. Results show that the macroscopic and microscopic relaxation parameters have opposite effects on the particle distribution and the mean square displacement. Furthermore, four significant results involving the constant 1/2 are obtained: the product of the particle number and the mean square displacement on the x axis equals 1/2; the exponent of the mean square displacement is 1/2 in the special case τ_q = τ_P; and the asymptotic form MSD ∼ t^(1/2) is obtained in both the short-time (t → 0) and long-time (t → ∞) limits.

  16. Comparison of Warner-Bratzler shear force values between round and square cross-section cores from cooked beef and pork Longissimus muscle.

    PubMed

    Silva, Douglas R G; Torres Filho, Robledo A; Cazedey, Henrique P; Fontes, Paulo R; Ramos, Alcinéia L S; Ramos, Eduardo M

    2015-05-01

    This study was conducted to investigate the effect of core sampling on Warner-Bratzler shear force evaluations of beef and pork loins (Longissimus thoracis et lumborum muscles) and to determine the relationship between them. Steaks of 2.54 cm from beef and pork loins were cooked, and five round cross-section cores and five square cross-section cores of each steak were taken for shear force evaluation. Core sampling influenced both beef and pork shear force values, with higher (P<0.05) average values and standard deviations for square cross-section cores. There was a strong linear relationship (P<0.01) between round and square cross-section cores for beef (R^2 = 0.78), pork (R^2 = 0.70) and beef+pork (R^2 = 0.82) samples. These results indicate that it is feasible to use square cross-section cores in the Warner-Bratzler shear force protocol as an alternative and potential method to standardize sampling for shear force measurements. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. SLIDER: a generic metaheuristic for the discovery of correlated motifs in protein-protein interaction networks.

    PubMed

    Boyen, Peter; Van Dyck, Dries; Neven, Frank; van Ham, Roeland C H J; van Dijk, Aalt D J

    2011-01-01

    Correlated motif mining (CMM) is the problem of finding overrepresented pairs of patterns, called motifs, in sequences of interacting proteins. Algorithmic solutions for CMM thereby provide a computational method for predicting binding sites for protein interaction. In this paper, we adopt a motif-driven approach where the support of candidate motif pairs is evaluated in the network. We experimentally establish the superiority of the chi-square-based support measure over other support measures. Furthermore, we show that CMM is an NP-hard problem for a large class of support measures (including chi-square) and reformulate the search for correlated motifs as a combinatorial optimization problem. We then present the generic metaheuristic SLIDER, which uses steepest ascent with a neighborhood function based on sliding motifs and employs the chi-square-based support measure. We show that SLIDER outperforms existing motif-driven CMM methods and scales to large protein-protein interaction networks. The SLIDER implementation and the data used in the experiments are available on http://bioinformatics.uhasselt.be.

  18. Inline Measurement of Particle Concentrations in Multicomponent Suspensions using Ultrasonic Sensor and Least Squares Support Vector Machines.

    PubMed

    Zhan, Xiaobin; Jiang, Shulan; Yang, Yili; Liang, Jian; Shi, Tielin; Li, Xiwen

    2015-09-18

    This paper proposes an ultrasonic measurement system based on least squares support vector machines (LS-SVM) for inline measurement of particle concentrations in multicomponent suspensions. First, the ultrasonic signals are analyzed and processed, and the optimal feature subset that contributes to the best model performance is selected based on the importance of the features. Second, the LS-SVM model is tuned, trained and tested with different feature subsets to obtain the optimal model. In addition, a comparison is made between the partial least squares (PLS) model and the LS-SVM model. Finally, the optimal LS-SVM model with the optimal feature subset is applied to inline measurement of particle concentrations in the mixing process. The results show that the proposed method is reliable and accurate for inline measurement of particle concentrations in multicomponent suspensions, and the measurement accuracy is sufficiently high for industrial application. Furthermore, the proposed method is applicable to dynamic modeling of nonlinear systems and provides a feasible way to monitor industrial processes.

  19. Development of Jet Noise Power Spectral Laws Using SHJAR Data

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2009-01-01

    High-quality jet noise spectral data measured at the Aeroacoustic Propulsion Laboratory at the NASA Glenn Research Center are used to examine a number of jet noise scaling laws. Configurations considered in the present study consist of convergent and convergent-divergent axisymmetric nozzles. Following the work of Viswanathan, velocity power factors are estimated using a least squares fit on spectral power density as a function of jet temperature and observer angle. The regression parameters are scrutinized for their uncertainty within the desired confidence margins. As an immediate application of the velocity power laws, the spectral density in supersonic jets is decomposed into components attributed to jet mixing noise and broadband shock-associated noise. Subsequent application of the least squares method to the shock power intensity shows that the latter also scales with some power of the shock parameter. A modified shock parameter is defined in order to reduce the dependency of the regression factors on the nozzle design point within the uncertainty margins of the least squares method.

  20. Least Square Fast Learning Network for modeling the combustion efficiency of a 300 MW coal-fired boiler.

    PubMed

    Li, Guoqiang; Niu, Peifeng; Wang, Huaibao; Liu, Yongchao

    2014-03-01

    This paper presents a novel artificial neural network with a very fast learning speed, all of whose weights and biases are determined by applying the Least Square method twice; it is therefore called the Least Square Fast Learning Network (LSFLN). A further difference from conventional neural networks is that the output neurons of the LSFLN receive not only the information from the hidden layer neurons, but also the external information itself directly from the input neurons. In order to test the validity of the LSFLN, it is applied to 6 classical regression problems and is also employed to build the functional relation between the combustion efficiency and operating parameters of a 300 MW coal-fired boiler. Experimental results show that, compared with other methods, the LSFLN with far fewer hidden neurons achieves much better regression precision and generalization ability at a much faster learning speed. Copyright © 2013 Elsevier Ltd. All rights reserved.
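
    A loose sketch of the output structure described above: the output layer sees both the hidden activations and the raw inputs, and its weights are obtained in closed form by least squares. The random hidden weights below are an assumption for illustration only; the paper determines all weights and biases by a twofold least-squares procedure that is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(5)
        X = rng.normal(size=(300, 6))                  # operating parameters (synthetic)
        y = X @ rng.normal(size=6) + np.tanh(X[:, 0]) + 0.05 * rng.normal(size=300)

        W_h = rng.normal(size=(6, 20))                 # hidden weights (assumed random)
        H = np.tanh(X @ W_h)                           # hidden activations

        # Output-layer features: hidden activations + direct input links + bias.
        F = np.column_stack([H, X, np.ones(len(X))])
        beta, *_ = np.linalg.lstsq(F, y, rcond=None)   # closed-form least squares
        print("train RMSE:", np.sqrt(np.mean((F @ beta - y) ** 2)))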

   1. Asymptotic Analysis Of The Total Least Squares ESPRIT Algorithm

    NASA Astrophysics Data System (ADS)

    Ottersten, B. E.; Viberg, M.; Kailath, T.

    1989-11-01

    This paper considers the problem of estimating the parameters of multiple narrowband signals arriving at an array of sensors. Modern approaches to this problem often involve costly procedures for calculating the estimates. The ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm was recently proposed as a means for obtaining accurate estimates without requiring a costly search of the parameter space. This method utilizes an array invariance to arrive at a computationally efficient multidimensional estimation procedure. Herein, the asymptotic distribution of the estimation error is derived for the Total Least Squares (TLS) version of ESPRIT. The Cramer-Rao Bound (CRB) for the ESPRIT problem formulation is also derived and found to coincide with the variance of the asymptotic distribution through numerical examples. The method is also compared to least squares ESPRIT and MUSIC as well as to the CRB for a calibrated array. Simulations indicate that the theoretic expressions can be used to accurately predict the performance of the algorithm.
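
    A compact, textbook-style TLS-ESPRIT sketch for a uniform linear array with half-wavelength spacing follows (synthetic two-source scenario); it illustrates the estimator analyzed in the paper, not its asymptotic derivation.

        import numpy as np

        rng = np.random.default_rng(6)
        m, N, d = 8, 500, 2                       # sensors, snapshots, sources
        theta = np.radians([-10.0, 25.0])         # true directions of arrival

        A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(theta)))
        S = (rng.normal(size=(d, N)) + 1j * rng.normal(size=(d, N))) / np.sqrt(2)
        X = A @ S + 0.1 * (rng.normal(size=(m, N)) + 1j * rng.normal(size=(m, N)))

        # Signal subspace from the dominant left singular vectors.
        Es = np.linalg.svd(X, full_matrices=False)[0][:, :d]
        E1, E2 = Es[:-1], Es[1:]                  # two maximally overlapping subarrays

        # TLS solution of E1 @ Psi ~ E2 via the SVD of the stacked matrix.
        _, _, Vh = np.linalg.svd(np.hstack([E1, E2]))
        V = Vh.conj().T
        Psi = -V[:d, d:] @ np.linalg.inv(V[d:, d:])

        phases = np.angle(np.linalg.eigvals(Psi))
        print(np.degrees(np.arcsin(phases / np.pi)))   # estimated DOAs, degrees

    The eigenvalues of Psi come back in arbitrary order, so in practice they are sorted or paired with source powers before reporting.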

  2. Distance-Based Phylogenetic Methods Around a Polytomy.

    PubMed

    Davidson, Ruth; Sullivant, Seth

    2014-01-01

    Distance-based phylogenetic algorithms attempt to solve the NP-hard least-squares phylogeny problem by mapping an arbitrary dissimilarity map representing biological data to a tree metric. The set of all dissimilarity maps is a Euclidean space properly containing the space of all tree metrics as a polyhedral fan. Outputs of distance-based tree reconstruction algorithms such as UPGMA and neighbor-joining are points in the maximal cones in the fan. Tree metrics with polytomies lie at the intersections of maximal cones. A phylogenetic algorithm divides the space of all dissimilarity maps into regions based upon which combinatorial tree is reconstructed by the algorithm. Comparison of phylogenetic methods can be done by comparing the geometry of these regions. We use polyhedral geometry to compare the local nature of the subdivisions induced by least-squares phylogeny, UPGMA, and neighbor-joining when the true tree has a single polytomy with exactly four neighbors. Our results suggest that in some circumstances, UPGMA and neighbor-joining poorly match least-squares phylogeny.

  3. Determination of main fruits in adulterated nectars by ATR-FTIR spectroscopy combined with multivariate calibration and variable selection methods.

    PubMed

    Miaw, Carolina Sheng Whei; Assis, Camila; Silva, Alessandro Rangel Carolino Sales; Cunha, Maria Luísa; Sena, Marcelo Martins; de Souza, Scheilla Vitorino Carvalho

    2018-07-15

    Grape, orange, peach and passion fruit nectars were formulated and adulterated by dilution with syrup, apple and cashew juices at 10 levels for each adulterant. Attenuated total reflectance Fourier transform mid infrared (ATR-FTIR) spectra were obtained. Partial least squares (PLS) multivariate calibration models allied to different variable selection methods, such as interval partial least squares (iPLS), ordered predictors selection (OPS) and genetic algorithm (GA), were used to quantify the main fruits. PLS improved by iPLS-OPS variable selection showed the highest predictive capacity to quantify the main fruit contents. The selected variables in the final models varied from 72 to 100; the root mean square errors of prediction were estimated from 0.5 to 2.6%; the correlation coefficients of prediction ranged from 0.948 to 0.990; and, the mean relative errors of prediction varied from 3.0 to 6.7%. All of the developed models were validated. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Beyond the single-file fluid limit using transfer matrix method: Exact results for confined parallel hard squares

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurin, Péter; Varga, Szabolcs

    2015-06-14

    We extend the transfer matrix method of one-dimensional hard core fluids placed between confining walls to the case where the particles can pass each other and at most two layers can form. We derive an eigenvalue equation for a quasi-one-dimensional system of hard squares confined between two parallel walls, where the pore width is between σ and 3σ (σ is the side length of the square). The exact equation of state and the nearest neighbor distribution functions show three different structures: a fluid phase with one layer, a fluid phase with two layers, and a solid-like structure where the fluid layers are strongly correlated. The structural transition between the differently ordered fluids develops continuously with increasing density, i.e., no thermodynamic phase transition occurs. The high density structure of the system consists of clusters with two layers which are broken by particles staying in the middle of the pore.

  5. Cost-Sharing of Ecological Construction Based on Trapezoidal Intuitionistic Fuzzy Cooperative Games

    PubMed Central

    Liu, Jiacai; Zhao, Wenjian

    2016-01-01

    There exist fuzziness and uncertainty in the process of ecological construction. The aim of this paper is to develop a direct and effective simplified method for obtaining the cost-sharing scheme when interested parties form a cooperative coalition to improve the ecological environment of the Min River together. First, we propose the solution concept of the least square prenucleolus of cooperative games with coalition values expressed by trapezoidal intuitionistic fuzzy numbers. Then, based on the squared numerical distance between two trapezoidal intuitionistic fuzzy numbers, we establish a corresponding quadratic programming model to obtain the least square prenucleolus, which effectively avoids the information distortion and uncertainty enlargement brought about by the subtraction of trapezoidal intuitionistic fuzzy numbers. Finally, we give a numerical example concerning the cost-sharing of ecological construction in Fujian Province, China, to show the validity, applicability and advantages of the proposed model and method. PMID:27834830

  6. Experimental demonstrations in audible frequency range of band gap tunability and negative refraction in two-dimensional sonic crystal.

    PubMed

    Pichard, Hélène; Richoux, Olivier; Groby, Jean-Philippe

    2012-10-01

    The propagation of audible acoustic waves in two-dimensional square-lattice tunable sonic crystals (SC) made of square cross-section infinitely rigid rods embedded in air is investigated experimentally. The band structure is calculated with the plane wave expansion (PWE) method and compared with experimental measurements carried out on a structure of finite extent, 200 cm wide, 70 cm deep and 15 cm high. The structure is made of square inclusions of 5 cm side with a periodicity of L = 7.5 cm, placed in between two rigid plates. The existence of tunable complete band gaps in the audible frequency range is demonstrated experimentally by rotating the scatterers around their vertical axes. Negative refraction is then analyzed by use of the anisotropy of the equi-frequency surface (EFS) in the first band and of a finite difference time domain (FDTD) method. Experimental results finally show negative refraction in the audible frequency range.

  7. Application of partial least squares near-infrared spectral classification in diabetic identification

    NASA Astrophysics Data System (ADS)

    Yan, Wen-juan; Yang, Ming; He, Guo-quan; Qin, Lin; Li, Gang

    2014-11-01

    In order to identify diabetic patients from the tongue's near-infrared (NIR) spectrum, a spectral classification model of the NIR reflectivity of the tongue tip is proposed, based on the partial least squares (PLS) method. Thirty-nine samples of tongue-tip NIR spectra were collected from healthy people and from diabetic patients, respectively (78 in total). After pretreatment of the reflectivity, the spectral data were set as the independent variable matrix and the classification information as the dependent variable matrix. The samples were divided into two groups, 53 samples as the calibration set and 25 as the prediction set, and PLS was used to build the classification model. The model constructed from the 53 calibration samples has a correlation of 0.9614 and a root mean square error of cross-validation (RMSECV) of 0.1387. The predictions for the 25 samples have a correlation of 0.9146 and an RMSECV of 0.2122. The experimental results show that the PLS method can achieve good classification between healthy people and diabetic patients.

  8. Surgical technique for balancing posterior spinal fusions to the pelvis using the T square of Tolo.

    PubMed

    Andras, Lindsay; Yamaguchi, Kent T; Skaggs, David L; Tolo, Vernon T

    2012-12-01

    Correcting pelvic obliquity and improving sitting balance in neuromuscular scoliosis often requires fixation to the pelvis. We describe the use of a T square instrument to assist intraoperatively in evaluating the alignment of these curves and achieving balance in the coronal plane. The T square instrument was constructed with a vertical limb perpendicular to 2 horizontal limbs in a T formation. At the conclusion of the instrumentation and preliminary reduction maneuvers, the T square was positioned with the horizontal limbs parallel to the pelvis and the vertical limb in line with the central sacral line. If the spine and pelvis were well balanced, fluoroscopic images demonstrated that the superior aspect of the vertical limb of the T square was crossing the vertebral body of T1. If this was not shown, then some combination of compression, distraction, or a change in the contouring of the rods was performed until this balance was achieved. In this series, we describe case examples in which the T square has been successfully used to aid in achieving balance in the coronal plane. This technique helps to overcome the challenges with positioning and imaging often encountered in managing these long, rigid curves. The T square is a useful adjunct in balancing posterior spinal fusions and evaluating the correction of pelvic obliquity in cases of neuromuscular scoliosis. This novel, yet simple, T square technique can be used for any method of posterior spinal fusion with lumbopelvic fixation to assist in the intraoperative evaluation and achievement of balance in the coronal plane and has become routine at our institution. IV.

   9. Measurement of Thermal Conductivity of Porcine Liver in the Temperature Range of Cryotherapy and Hyperthermia (250~315 K) by a Thermal Sensor Made of a Micron-Scale Enameled Copper Wire.

    PubMed

    Jiang, Z D; Zhao, G; Lu, G R

    BACKGROUND: Cryotherapy and hyperthermia are effective treatments for several diseases, especially liver cancers. Thermal conductivity is a significant thermal property for the prediction and guidance of such surgical procedures. However, thermal conductivity data for organs and tissues, especially over the temperature range covering both cryotherapy and hyperthermia, are scarce. The aim of this work was to provide comprehensive thermal conductivity data of liver for both cryotherapy and hyperthermia. A hot probe made of a stainless steel needle and a micron-sized copper wire was used for the measurements. To verify the data processing, both the least square method and the Monte Carlo inversion method were used to determine the hot probe constants, with water and a 29.9% CaCl2 aqueous solution as reference materials. The thermal conductivities of Hanks solution and of pork liver bathed in Hanks solution were then measured. The effective length for the two methods is nearly the same, but the heat capacity of the probe calibrated by the Monte Carlo inversion is temperature dependent. The fairly comprehensive thermal conductivities of porcine liver measured with the two methods over the target temperature range are verified to be similar. We provide integrated thermal conductivity data of liver for cryotherapy and hyperthermia by two methods, making more accurate predictions possible for surgery. The least square method and the Monte Carlo inversion method have their respective advantages and disadvantages: the least square method is suitable for measuring liquids that are not prone to convection, or solids, over a wide temperature range, while the Monte Carlo inversion method allows accurate and rapid measurement.

  10. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator for Poisson distributed data, rather than the nonlinear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, and is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian; however, this criterion is not easy to satisfy in practice, as it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting. The more appropriate measure based on the MLE for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to nonlinear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the ubiquity of the fast Levenberg-Marquardt procedure for fitting nonlinear models by least squares (simple searches return roughly 10000 references, not counting those who use it without knowing they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter. This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence, while downward gradient methods have a much wider domain of convergence but converge extremely slowly near the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on. Only those who are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. Ways have been found to use successive nonlinear least squares fits to obtain similarly unbiased results, but this procedure is justified only by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE for Poisson deviates that has convergence domains and rates comparable to nonlinear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure to minimize not the least squares measure, but the MLE for Poisson deviates.
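
    A bare-bones illustration of the estimator in question: minimizing the Poisson negative log-likelihood of a histogram fit instead of a least-squares measure. A general-purpose optimizer stands in for the paper's Levenberg-Marquardt extension, and the single-exponential decay data are synthetic.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(7)
        t = np.arange(0, 10.0, 0.1)

        def model(p):
            return p[0] * np.exp(-t / p[1]) + p[2]   # amplitude, lifetime, offset

        counts = rng.poisson(model([100.0, 2.0, 5.0]))  # simulated histogram

        def poisson_nll(p):
            mu = model(p)
            if np.any(mu <= 0):
                return np.inf
            # Negative log-likelihood up to a p-independent constant (log factorials).
            return np.sum(mu - counts * np.log(mu))

        fit = minimize(poisson_nll, x0=[80.0, 1.5, 3.0], method="Nelder-Mead")
        print(fit.x)   # MLE estimates; least squares would be biased at low counts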

  11. Adaptive channel estimation for soft decision decoding over non-Gaussian optical channel

    NASA Astrophysics Data System (ADS)

    Xiang, Jing-song; Miao, Tao-tao; Huang, Sheng; Liu, Huan-lin

    2016-10-01

    An adaptive a priori log-likelihood ratio (LLR) estimation method is proposed for non-Gaussian channels in intensity modulation/direct detection (IM/DD) optical communication systems. Using a nonparametric histogram and weighted least squares linear fitting in the tail regions, the LLR is estimated and used for the soft decision decoding of low-density parity-check (LDPC) codes. This method adapts well to the three main kinds of IM/DD optical channel, i.e., the chi-square channel, the Webb-Gaussian channel, and the additive white Gaussian noise (AWGN) channel. The performance penalty of the channel estimation is negligible.
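
    A minimal sketch of the idea, assuming the scheme is: estimate per-bit conditional densities with histograms, form the empirical LLR, and replace the noisy tail-region values with a count-weighted least-squares line. All distributions, bin edges, the tail threshold, and helper names below are invented for illustration.

        import numpy as np

        def histogram_llr(y0, y1, edges):
            """Empirical LLR log(p(y|1) / p(y|0)) from per-bit histograms."""
            h0, _ = np.histogram(y0, bins=edges, density=True)
            h1, _ = np.histogram(y1, bins=edges, density=True)
            eps = 1e-12
            return np.log((h1 + eps) / (h0 + eps))

        def weighted_tail_fit(centers, llr, counts, tail):
            """Weighted least-squares line through the tail-region LLR values,
            weighting each bin by its event count (sparser bins count less)."""
            sw = np.sqrt(counts[tail].astype(float))
            A = np.vstack([centers[tail], np.ones(tail.sum())]).T
            slope, intercept = np.linalg.lstsq(A * sw[:, None],
                                               llr[tail] * sw, rcond=None)[0]
            return slope * centers + intercept

        # Illustrative data: chi-square-like channel for bit 1, Gaussian for bit 0
        rng = np.random.default_rng(1)
        y0 = rng.normal(1.0, 0.3, 100_000)
        y1 = rng.chisquare(4, 100_000)
        edges = np.linspace(-1.0, 15.0, 121)
        centers = 0.5 * (edges[:-1] + edges[1:])

        llr = histogram_llr(y0, y1, edges)
        counts = np.histogram(np.concatenate([y0, y1]), bins=edges)[0]
        tail = centers > 8.0            # sparse upper tail: smooth it by the fit
        llr[tail] = weighted_tail_fit(centers, llr, counts, tail)[tail]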

  12. Least-squares luma-chroma demultiplexing algorithm for Bayer demosaicking.

    PubMed

    Leung, Brian; Jeon, Gwanggil; Dubois, Eric

    2011-07-01

    This paper addresses the problem of interpolating missing color components at the output of a Bayer color filter array (CFA), a process known as demosaicking. A luma-chroma demultiplexing algorithm is presented in detail, using a least-squares design methodology for the required bandpass filters. A systematic study of objective demosaicking performance and system complexity is carried out, and several system configurations are recommended. The method is compared with other benchmark algorithms in terms of CPSNR and S-CIELAB ΔE* objective quality measures and demosaicking speed, and is found to provide excellent performance and the best quality-speed tradeoff among the methods studied.
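
    The least-squares filter design referred to here can be illustrated in one dimension (the paper's bandpass filters are two-dimensional, but the normal equations have the same structure). The sketch below, with an invented passband and design grid, picks symmetric FIR taps that minimize the weighted squared error between the filter's amplitude response and a desired response.

        import numpy as np

        def ls_fir_design(num_taps, grid, desired, weight):
            """Least-squares FIR design: choose taps h minimizing
            sum_k weight[k] * (A(w_k) - desired[k])^2 on a frequency grid,
            where A(w) is the amplitude response of a symmetric filter."""
            n = np.arange(num_taps)
            A = np.cos(np.outer(grid, n - (num_taps - 1) / 2))
            sw = np.sqrt(weight)
            h, *_ = np.linalg.lstsq(A * sw[:, None], desired * sw, rcond=None)
            return h

        # Invented bandpass specification on a 512-point grid
        grid = np.linspace(0.0, np.pi, 512)
        desired = ((grid > 1.2) & (grid < 1.9)).astype(float)
        h = ls_fir_design(63, grid, desired, np.ones_like(grid))

        # Achieved amplitude response on the design grid
        resp = np.cos(np.outer(grid, np.arange(63) - 31)) @ h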

  13. Ferrocene-Boronic Acid-Fructose Binding Based on Dual-Plate Generator-Collector Voltammetry and Square-Wave Voltammetry.

    PubMed

    Li, Meng; Xu, Su-Ying; Gross, Andrew J; Hammond, Jules L; Estrela, Pedro; Weber, James; Lacina, Karel; James, Tony D; Marken, Frank

    2015-06-10

    The interaction of ferrocene-boronic acid with fructose is investigated in aqueous 0.1 M phosphate buffer at pH 7, 8, and 9. Two voltammetric methods, based on (1) a dual-plate generator-collector micro-trench electrode (steady-state) and (2) square-wave voltammetry (transient), are applied and compared in terms of mechanistic resolution. A combination of experimental data is employed to obtain new insights into the binding rates and the cumulative binding constants for both the reduced ferrocene-boronic acid (pH dependent and weakly binding) and the oxidised ferrocene-boronic acid (pH independent and strongly binding).

  14. A weighted least squares approach to retrieve aerosol layer height over bright surfaces applied to GOME-2 measurements of the oxygen A band for forest fire cases over Europe

    NASA Astrophysics Data System (ADS)

    Nanda, Swadhin; Pepijn Veefkind, J.; de Graaf, Martin; Sneep, Maarten; Stammes, Piet; de Haan, Johan F.; Sanders, Abram F. J.; Apituley, Arnoud; Tuinder, Olaf; Levelt, Pieternel F.

    2018-06-01

    This paper presents a weighted least squares approach to retrieve aerosol layer height from top-of-atmosphere reflectance measurements in the oxygen A band (758-770 nm) over bright surfaces. We discuss a property of the measurement error covariance matrix that gives photons travelling from the surface a higher preference over photons scattered back from the aerosol layer. This is a potential source of biases in the estimation of aerosol properties over land, which can be mitigated by revisiting the design of the measurement error covariance matrix. The alternative proposed in this paper, which we call the dynamic scaling method, introduces a scene-dependent and wavelength-dependent modification of the measurement signal-to-noise ratio in order to influence this matrix. The method is generally applicable to other retrieval algorithms using weighted least squares. To test it, synthetic experiments are carried out, in addition to application to GOME-2A and GOME-2B measurements of the oxygen A band over the August 2010 Russian wildfires and the October 2017 Portugal wildfire plume over western Europe.
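
    The weighted least-squares machinery involved is standard; what the dynamic scaling method changes is the assumed noise entering the error covariance. The sketch below is generic, with an invented two-parameter Jacobian and an invented scaling rule, and only shows where a scene- and wavelength-dependent rescaling of the noise would enter; the paper's actual scaling rule is not reproduced here.

        import numpy as np

        def wls_step(K, y, sigma):
            """One weighted least-squares update: x = (K^T S^-1 K)^-1 K^T S^-1 y,
            with S = diag(sigma^2) the measurement error covariance."""
            W = 1.0 / sigma ** 2
            KtW = K.T * W                 # K^T S^-1, exploiting the diagonal S
            return np.linalg.solve(KtW @ K, KtW @ y)

        # Invented Jacobian: column 0 ~ surface sensitivity, column 1 ~ aerosol
        # layer height sensitivity (shapes are made up for illustration)
        rng = np.random.default_rng(2)
        n = 200
        K = np.column_stack([np.ones(n), np.exp(-np.linspace(0.0, 3.0, n))])
        sigma = np.full(n, 0.01)
        y = K @ np.array([0.4, 1.5]) + rng.normal(0.0, sigma)

        x_plain = wls_step(K, y, sigma)
        # Hypothetical rescaling: inflate the assumed noise where the surface
        # term dominates, so those points no longer dominate the fit
        scale = 1.0 + 4.0 * (K[:, 0] > K[:, 1])
        x_scaled = wls_step(K, y, sigma * scale)
        print(x_plain, x_scaled)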

  15. Confidence Region of Least Squares Solution for Single-Arc Observations

    NASA Astrophysics Data System (ADS)

    Principe, G.; Armellin, R.; Lewis, H.

    2016-09-01

    The total number of active satellites, rocket bodies, and debris larger than 10 cm is currently about 20,000. Considering all resident space objects larger than 1 cm, this rises to an estimated minimum of 500,000 objects. Latest-generation sensor networks will be able to detect small-size objects, producing millions of observations per day. Due to observability constraints, it is likely that long gaps between observations will occur for small objects. This requires determining the space object (SO) orbit and accurately describing the associated uncertainty when observations are acquired on a single arc. The aim of this work is to revisit the classical least squares method, taking advantage of the high order Taylor expansions enabled by differential algebra. In particular, the high order expansion of the residuals with respect to the state is used to implement an arbitrary order least squares solver, avoiding the typical approximations of differential correction methods. In addition, the same expansions are used to accurately characterize the confidence region of the solution, going beyond the classical Gaussian distributions. The properties and performance of the proposed method are discussed using optical observations of objects in LEO, HEO, and GEO.
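
    For reference, the classical first-order procedure that the paper generalizes is ordinary differential correction, i.e. iterated weighted least squares on a linearized residual. A minimal sketch follows, with a toy oscillation standing in for orbit dynamics; the arbitrary-order Taylor expansions of differential algebra are beyond a short example.

        import numpy as np

        def differential_correction(residual, jacobian, x0, W, tol=1e-10, max_iter=20):
            """Classic first-order least-squares differential correction:
            x <- x - (J^T W J)^-1 J^T W r, the baseline that the
            arbitrary-order Taylor-expansion solver generalizes."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                r = residual(x)
                J = jacobian(x)
                JtW = J.T @ W
                dx = np.linalg.solve(JtW @ J, JtW @ r)
                x = x - dx
                if np.linalg.norm(dx) < tol:
                    break
            return x

        # Toy two-parameter problem: fit frequency and phase of an oscillation
        t = np.linspace(0.0, 4.0, 40)
        obs = np.sin(2.3 * t + 0.4)
        res = lambda x: np.sin(x[0] * t + x[1]) - obs
        jac = lambda x: np.column_stack([t * np.cos(x[0] * t + x[1]),
                                         np.cos(x[0] * t + x[1])])
        x_hat = differential_correction(res, jac, x0=[2.0, 0.0], W=np.eye(len(t)))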

  16. Total sulfur determination in residues of crude oil distillation using FT-IR/ATR and variable selection methods

    NASA Astrophysics Data System (ADS)

    Müller, Aline Lima Hermes; Picoloto, Rochele Sogari; Mello, Paola de Azevedo; Ferrão, Marco Flores; dos Santos, Maria de Fátima Pereira; Guimarães, Regina Célia Lourenço; Müller, Edson Irineu; Flores, Erico Marlon Moraes

    2012-04-01

    Total sulfur concentration was determined in atmospheric residue (AR) and vacuum residue (VR) samples obtained from the petroleum distillation process by Fourier transform infrared spectroscopy with attenuated total reflectance (FT-IR/ATR) in association with chemometric methods. The calibration and prediction sets consisted of 40 and 20 samples, respectively. Calibration models were developed using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Different treatments and pre-processing steps were also evaluated; pre-treatment based on multiplicative scatter correction (MSC) with mean-centered data was selected for model construction. The use of siPLS as the variable selection method provided a model with root mean square error of prediction (RMSEP) values significantly better than those obtained by the PLS model using all variables. The best model was obtained using the siPLS algorithm with the spectra divided into 20 intervals and combinations of 3 intervals (911-824, 823-736 and 737-650 cm-1). This model produced an RMSECV of 400 mg kg-1 S and an RMSEP of 420 mg kg-1 S, with a correlation coefficient of 0.990.
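
    A minimal sketch of the iPLS step, using scikit-learn and synthetic spectra (all sizes and band positions invented): the spectrum is cut into equal-width intervals, a PLS model is cross-validated on each, and the intervals are ranked by RMSECV. siPLS then searches over combinations of the top intervals, which is omitted here.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        def ipls_select(X, y, n_intervals=20, n_components=3):
            """Interval PLS in its simplest form: rank equal-width spectral
            intervals by the cross-validated RMSE of a per-interval PLS model."""
            bounds = np.linspace(0, X.shape[1], n_intervals + 1).astype(int)
            scores = []
            for lo, hi in zip(bounds[:-1], bounds[1:]):
                pls = PLSRegression(n_components=min(n_components, hi - lo))
                y_cv = cross_val_predict(pls, X[:, lo:hi], y, cv=5)
                scores.append(np.sqrt(np.mean((y_cv.ravel() - y) ** 2)))
            return np.argsort(scores), np.array(scores)

        # Synthetic spectra: 40 samples x 600 variables, signal in one band
        rng = np.random.default_rng(3)
        y = rng.uniform(100.0, 2000.0, 40)                  # e.g., mg/kg S
        X = rng.normal(0.0, 0.01, (40, 600))
        X[:, 450:480] += np.outer(y / 2000.0, np.ones(30))  # sulfur-sensitive band
        ranking, rmsecv = ipls_select(X, y)
        print(ranking[:3])   # the top-ranked interval should cover columns 450-480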

  17. A novel second-order standard addition analytical method based on data processing with multidimensional partial least-squares and residual bilinearization.

    PubMed

    Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C

    2009-10-05

    In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.
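
    The core data-processing step is easy to state in matrix terms. The toy sketch below (profiles, background, and concentrations all invented) verifies that subtracting the test-sample matrix from each standard-addition matrix leaves matrices exactly proportional to the added amounts, which is what makes subsequent external calibration possible; on real, noisy data a PARAFAC, MCR-ALS, or N-PLS/RBL decomposition replaces the closing check.

        import numpy as np

        # Hypothetical second-order data: each sample is an excitation x emission matrix
        ex = np.exp(-0.5 * ((np.arange(30) - 12) / 4.0) ** 2)   # analyte excitation profile
        em = np.exp(-0.5 * ((np.arange(40) - 25) / 5.0) ** 2)   # analyte emission profile
        bg = np.outer(np.linspace(1.0, 0.2, 30),
                      np.linspace(1.0, 0.5, 40))                # unmodeled interferent

        c_test = 0.7                           # unknown analyte level in the test sample
        added = np.array([0.5, 1.0, 1.5])      # standard additions

        D_test = c_test * np.outer(ex, em) + bg
        D_add = [(c_test + a) * np.outer(ex, em) + bg for a in added]

        # Key step of the modified method: subtracting the test-sample matrix
        # removes both the background and the test-analyte contribution
        D_diff = np.stack([D - D_test for D in D_add])

        # Each difference matrix is rank one and proportional to the amount added;
        # its leading singular triplet recovers the analyte profiles and the
        # per-unit-concentration sensitivity used for external calibration
        u, s, vt = np.linalg.svd(D_diff[0])
        print(s[0] / added[0])                 # sensitivity = |ex| * |em| here
        print(np.allclose(D_diff / added[:, None, None], D_diff[0] / added[0]))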

  18. Applying ISO 11929:2010 Standard to detection limit calculation in least-squares based multi-nuclide gamma-ray spectrum evaluation

    NASA Astrophysics Data System (ADS)

    Kanisch, G.

    2017-05-01

    The concepts of ISO 11929 (2010) are applied to the evaluation of radionuclide activities from more complex multi-nuclide gamma-ray spectra. From net peak areas estimated by peak fitting, activities and their standard uncertainties are calculated by a weighted linear least-squares method with an additional step in which the uncertainties of the design matrix elements are taken into account. A numerical treatment of the standard's uncertainty function, based on ISO 11929 Annex C.5, leads to a procedure for deriving decision threshold and detection limit values. The methods shown allow resolving interferences between radionuclide activities, also in the calculation of detection limits, where they can improve the latter by including more than one gamma line per radionuclide. The common single-nuclide weighted mean is extended to an interference-corrected (generalized) weighted mean, which, combined with the least-squares method, allows faster detection limit calculations. In addition, a new grouped uncertainty budget is introduced, which for each radionuclide gives uncertainty budgets for seven main variables, such as net count rates, peak efficiencies, and gamma emission intensities; grouping refers to summation over the lists of peaks per radionuclide.
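
    The central estimation step, unmixing net peak areas into activities by weighted linear least squares, is compact enough to sketch. The numbers and design matrix below are invented, and the standard's additional step of propagating design-matrix uncertainties is not shown.

        import numpy as np

        def wls_activities(y, u_y, M):
            """Weighted linear least squares for multi-nuclide evaluation:
            net peak areas y (uncertainties u_y) = M @ activities, where
            M[i, j] combines efficiency, emission intensity, and live time
            for peak i of nuclide j."""
            W = np.diag(1.0 / u_y ** 2)
            cov = np.linalg.inv(M.T @ W @ M)
            a = cov @ M.T @ W @ y
            return a, np.sqrt(np.diag(cov))

        # Two nuclides, three peaks; nuclide 2 interferes with peak 1
        M = np.array([[0.012, 0.004],
                      [0.008, 0.000],
                      [0.000, 0.015]])
        y = np.array([520.0, 260.0, 450.0])    # net counts
        u_y = np.sqrt(y + 50.0)                # counting plus background uncertainty
        a, u_a = wls_activities(y, u_y, M)
        print(a, u_a)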

  19. Prediction of valid acidity in intact apples with Fourier transform near infrared spectroscopy.

    PubMed

    Liu, Yan-De; Ying, Yi-Bin; Fu, Xia-Ping

    2005-03-01

    To develop nondestructive acidity prediction for intact Fuji apples, the potential of a Fourier transform near infrared (FT-NIR) method with fiber optics in interactance mode was investigated. Interactance in the 800 nm to 2619 nm region was measured for intact apples harvested from early to late maturity stages. Spectral data were analyzed by two multivariate calibration techniques: partial least squares (PLS) and principal component regression (PCR). A total of 120 Fuji apples were tested, and 80 of them were used to form a calibration data set. The influences of different data preprocessing and spectra treatments were also quantified. Calibration models based on smoothed spectra were slightly worse than those based on derivative spectra, and the best result was obtained with a segment length of 5 nm and a gap size of 10 points. Depending on the data preprocessing and the PLS method, the best prediction model yielded a coefficient of determination (r2) of 0.759, a low root mean square error of prediction (RMSEP) of 0.0677, and a low root mean square error of calibration (RMSEC) of 0.0562. The results indicate the feasibility of FT-NIR spectral analysis for predicting apple valid acidity nondestructively.
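
    A rough sketch of this kind of calibration comparison, with synthetic spectra (all sizes and shapes invented): Savitzky-Golay first-derivative preprocessing removes baseline offsets, after which PLS and PCR models are compared by cross-validated RMSEP.

        import numpy as np
        from scipy.signal import savgol_filter
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_predict
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(5)
        y = rng.uniform(0.2, 0.8, 120)                     # invented acidity values
        X = rng.normal(0.0, 0.02, (120, 500))
        X += np.outer(y, np.sin(np.linspace(0.0, 6.0, 500)))      # acidity signal
        X += np.outer(rng.uniform(0.5, 1.5, 120), np.ones(500))   # baseline offsets

        # First-derivative (Savitzky-Golay) preprocessing removes the offsets
        Xd = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)

        pls = PLSRegression(n_components=6)
        pcr = make_pipeline(PCA(n_components=6), LinearRegression())
        for name, mdl in [("PLS", pls), ("PCR", pcr)]:
            y_cv = cross_val_predict(mdl, Xd, y, cv=5).ravel()
            print(name, "RMSEP:", np.sqrt(np.mean((y_cv - y) ** 2)))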

  20. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2016-10-14

    A piezo-resistive pressure sensor is made of silicon, whose behaviour is considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local Radial Basis Function (RBF) kernel and a global polynomial kernel, is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other methods compared on several performance measures, including the maximum absolute relative error, the minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
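
    The LSSVM itself reduces to one linear solve, which makes the hybrid-kernel idea easy to sketch. Below, a weighted RBF-plus-polynomial kernel is plugged into the standard LSSVM system; the kernel weights, gamma, and the drift curve are invented, and the chaotic ions motion hyper-parameter search is not shown (any global optimizer could stand in for it).

        import numpy as np

        def hybrid_kernel(X1, X2, w=0.7, sigma=0.2, degree=2, c0=1.0):
            """Weighted mix of a local RBF kernel and a global polynomial kernel."""
            d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
            rbf = np.exp(-d2 / (2.0 * sigma ** 2))
            poly = (X1 @ X2.T + c0) ** degree
            return w * rbf + (1.0 - w) * poly

        def lssvm_fit(X, y, gamma=100.0):
            """LSSVM regression in closed form: solve the (n+1) x (n+1) system
            [[0, 1^T], [1, K + I/gamma]] [b, alpha] = [0, y]."""
            n = len(y)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = hybrid_kernel(X, X) + np.eye(n) / gamma
            sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
            return sol[0], sol[1:]          # bias b, dual coefficients alpha

        def lssvm_predict(X_new, X, b, alpha):
            return hybrid_kernel(X_new, X) @ alpha + b

        # Toy compensation curve: sensor drift vs normalized temperature
        rng = np.random.default_rng(6)
        T = (np.linspace(-20.0, 80.0, 60) / 100.0)[:, None]   # scaled inputs
        drift = 0.5 * T.ravel() + 2.0 * T.ravel() ** 2 + rng.normal(0.0, 0.02, 60)
        b, alpha = lssvm_fit(T, drift)
        pred = lssvm_predict(T, T, b, alpha)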
