Pfalzer, Lucinda; Fry, Donna
2011-01-01
Pulmonary muscle weakness is common in ambulatory people with multiple sclerosis (MS) and may lead to deficits in mobility function. The purpose of this study was to examine the effect of a 10-week home-based exercise program using an inspiratory muscle threshold trainer (IMT) on the results of four lower-extremity physical performance tests in people with MS. The study design was a two-group (experimental-control), pretest-posttest study. Outcome measures consisted of pulmonary function measures including maximal inspiratory pressure (MIP), maximal expiratory pressure (MEP), and maximal voluntary ventilation (MVV), and the following lower-extremity physical performance measures: the 6-Minute Walk (6MW) distance, gait velocity (GV), the Sit-to-Stand Test (SST), the Functional Stair Test (FST), and a balance test (BAL). A total of 46 ambulatory participants (Expanded Disability Status Scale [EDSS] score, 2.0-6.5) with MS were randomly assigned to an intervention group (mean EDSS score, 4.1) that received 10 weeks of home-based inspiratory muscle training or a nontreatment control group (mean EDSS score, 3.2). Of the original 46 participants, 20 intervention group participants and 19 control group participants completed the study. Compared with the control group, the intervention group made significantly greater gains in inspiratory muscle strength (P = .003) and timed balance scores (P = .008). A nonsignificant improvement in 6MW distance (P = .086) was also noted in the IMT-trained group as compared with the control group. This is the first study directly linking improvement in respiratory function to improvement in physical performance function in people with mild-to-moderate disability due to MS. PMID:24453703
Rabies in a 10-week-old puppy.
White, Jennifer; Taylor, Susan M; Wolfram, Kathrin L; O'Conner, Brendan P
2007-09-01
A 10-week-old, male, border collie-cross puppy was examined for an acute onset of unilateral vestibular signs. Neurologic deterioration was rapid over the next 12 hours and the puppy was euthanized. Rabies was diagnosed by histopathologic and immunohistochemical examination. PMID:17966334
ERIC Educational Resources Information Center
Curtis, David F.
2010-01-01
This investigation examined the effectiveness of a pilot, manualized 10-week intervention of family skills training for ADHD-related symptoms. The intervention combined behavioral parent training and child focused behavioral activation therapy. Participants were families with children ages 7-10 diagnosed with ADHD-Combined Type. This pilot…
ERIC Educational Resources Information Center
Schmidt, Mirko; Valkanover, Stefan; Roebers, Claudia; Conzelmann, Achim
2013-01-01
Most physical education intervention studies on the positive effect of sports on self-concept development have attempted to "increase" schoolchildren's self-concept without taking the "veridicality" of the self-concept into account. The present study investigated whether a 10-week intervention in physical education would…
Yang, Wen-Wen; Liu, Ya-Chen; Lu, Lee-Chang; Chang, Hsiao-Yun; Chou, Paul Pei-Hsi; Liu, Chiang
2013-12-01
Compared with regulation-weight baseballs, lightweight baseballs generate lower torque on the shoulder and elbow joints without altering the pitching movement and timing. This study investigates the throwing accuracy, throwing velocity, arm swing velocity, and maximum shoulder external rotation (MSER) of adolescent players after 10 weeks of pitching training with appropriate lightweight baseballs. We assigned 24 adolescent players to a lightweight baseball group (group L) and a regulation-weight baseball group (group R) based on their pretraining throwing velocity. Both groups received pitching training 3 times per week for 10 weeks with 4.4- and 5-oz baseballs. The players' throwing accuracy, throwing velocity, arm swing velocity, and MSER were measured from 10 maximum-effort throws using a regulation-weight baseball before and after undergoing the pitching training. The results showed that the players in group L significantly increased their throwing velocity and arm swing velocity (p < 0.05) after 10 weeks of pitching training with the 4.4-oz baseball, whereas group R did not (p > 0.05). Furthermore, the percentage change in the throwing velocity and arm swing velocity of group L was significantly superior to that of group R (p < 0.05). Thus, we concluded that the 10 weeks of pitching training with an appropriate lightweight baseball substantially enhanced the arm swing velocity and throwing velocity of the adolescent baseball players. These findings suggest that using a lightweight baseball, which can reduce the risk of injury without altering pitching patterns, has positive training effects on players in the stage of rapid physical growth and technique development. PMID:23603999
Comparison of upper body strength gains between men and women after 10 weeks of resistance training
Steele, James; Pereira, Maria C.; Castanheira, Rafael P.M.; Paoli, Antonio; Bottaro, Martim
2016-01-01
Resistance training (RT) offers benefits to both men and women. However, studies of the differences between men and women in response to an RT program are inconclusive, and few data are available on the upper body strength response. The aim of this study was to compare elbow flexor strength gains in men and women after 10 weeks of RT. Forty-four college-aged men (22.63 ± 2.34 years) and forty-seven college-aged women (21.62 ± 2.96 years) participated in the study. The RT program was performed two days a week for 10 weeks. Before and after the training period, peak torque (PT) of the elbow flexors was measured with an isokinetic dynamometer. PT values were higher in men than in women in both pre- and post-tests (p < 0.01). Both males and females significantly increased elbow flexor strength (p < 0.05); however, strength changes did not differ between genders after the 10-week RT program (11.61 and 11.76% for men and women, respectively; p > 0.05). Effect sizes were 0.57 and 0.56 for men and women, respectively. In conclusion, the present study suggests that men and women have a similar upper body strength response to RT. PMID:26893958
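The gains above are reported as percent change and effect size. As a hedged illustration of how such figures are derived (the torque values below are hypothetical, not the study's data), the two quantities can be computed as:

```python
def percent_change(pre, post):
    """Relative change from pre to post, in percent."""
    return (post - pre) / pre * 100

def cohens_d(pre_mean, post_mean, sd):
    """Standardized mean difference (Cohen's d) for a pre-post change."""
    return (post_mean - pre_mean) / sd

# Hypothetical elbow-flexor peak-torque values (N·m); not taken from the study.
pre, post, sd = 60.0, 67.0, 12.0
print(round(percent_change(pre, post), 2))  # 11.67
print(round(cohens_d(pre, post, sd), 2))    # 0.58
```

Note that whether the pre-test SD or a pooled SD goes in the denominator varies between studies; the abstract does not say which convention was used here.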
High-intensity physical training in adults with asthma. A 10-week rehabilitation program.
Emtner, M; Herala, M; Stålenheim, G
1996-02-01
Twenty-six adults (23 to 58 years) with mild to moderate asthma underwent a 10-week supervised rehabilitation program, with emphasis on physical training. In the first 2 weeks, they exercised daily in an indoor swimming pool (33 degrees C) and received education about asthma, medication, and principles of physical training. In the following 8 weeks, they exercised in the pool twice a week. Every training session lasted 45 min. The training sessions were made as suitable as possible for the individual subjects, in order to minimize "drop outs" from the program. The aim of the study was to evaluate the efficacy of the rehabilitation program and to determine if inactive asthmatic adults can exercise at high intensity. The rehabilitation program was preceded by a 6-min submaximal cycle ergometry test, a 12-min walking test, spirometry, and a methacholine provocation test. The subjects also responded to a five-item questionnaire related to anxiety about exercise, breathlessness, and asthma symptoms using a visual analogue scale. All subjects were able to perform physical training at a very high intensity, to 80 to 90% of their predicted maximal heart rate. No asthmatic attacks occurred in connection with the training sessions. Twenty-two of the 26 subjects completed the rehabilitation program, felt confident with physical training, and planned to continue regular physical training after the 10-week program. Improvements in cardiovascular conditioning, measured as a decreased heart rate at the same load on the cycle ergometer (average of 12 beats/min), and as a longer distance at the 12-min walking test (average of 111 m), were observed during the program. FEV1 increased significantly from 2.2 to 2.5 L. Forced expiratory flow at 25% of vital capacity also increased slightly but significantly. Methacholine provocation dose causing a fall in FEV1 by 20% was unchanged. Seventeen subjects had a peak expiratory flow reduction of more than 15% after the preprogram ergometry
ERIC Educational Resources Information Center
Dreyer, Lukas; Dreyer, Sonja; Rankin, Dean
2012-01-01
This study examined the effect of a 10-week physical exercise program on the health status of college staff. Eighty-one participants were pre-tested on 22 variables including physical fitness, biochemical status, psychological health, and morphological measures. Participants in an experimental group (n = 61) received a 10-week intervention…
Shahril, Mohd Razif; Wan Dali, Wan Putri Elena; Lua, Pei Lin
2013-01-01
The aim of the study was to evaluate the effectiveness of implementing a multimodal nutrition education intervention (NEI) to improve dietary intake among university students. The study used a cluster randomised controlled design at four public universities on the East Coast of Malaysia. A total of 417 university students participated in the study. They were randomly selected and assigned into two arms, that is, an intervention group (IG) or a control group (CG), according to their cluster. The IG received a 10-week multimodal intervention using three modes (conventional lectures, brochures, and text messages), while the CG did not receive any intervention. Dietary intake was assessed before and after the intervention, and outcomes were reported as nutrient intakes as well as average daily servings of food intake. Analysis of covariance (ANCOVA) and adjusted effect sizes were used to determine differences in dietary changes between groups and over time. Results showed that, compared to the CG, participants in the IG significantly improved their dietary intake by increasing their intake of energy, carbohydrate, calcium, vitamin C, thiamine, fruits and 100% fruit juice, fish, eggs, milk, and dairy products, while at the same time significantly decreasing their processed food intake. In conclusion, a multimodal NEI focusing on healthy eating promotion is an effective approach to improving dietary intakes among university students. PMID:24069535
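The ANCOVA described here is equivalent to regressing each post-intervention outcome on its baseline value plus a group indicator. A minimal sketch on synthetic data (every number below is invented for illustration; none comes from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
pre = rng.normal(2000.0, 300.0, n)   # synthetic baseline energy intake (kcal)
group = np.repeat([0, 1], n // 2)    # 0 = control (CG), 1 = intervention (IG)
true_effect = 150.0                  # assumed adjusted intervention effect (kcal)
post = 0.8 * pre + true_effect * group + rng.normal(0.0, 100.0, n)

# ANCOVA as ordinary least squares: post ~ intercept + baseline covariate + group
X = np.column_stack([np.ones(n), pre, group])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
print(round(float(beta[2]), 1))  # adjusted group effect, close to the assumed 150
```

The coefficient on the group indicator is the baseline-adjusted between-group difference that an ANCOVA reports.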
Sensitivity of vergence responses of 5- to 10-week-old human infants.
Seemiller, Eric S; Wang, Jingyun; Candy, T Rowan
2016-01-01
Infants have been shown to make vergence eye movements by 1 month of age to stimulation with prisms or targets moving in depth. However, little is currently understood about the threshold sensitivity of the maturing visual system to such stimulation. In this study, 5- to 10-week-old human infants and adults viewed a target moving in depth as a triangle wave of three amplitudes (1.0, 0.5, and 0.25 meter angles). Their horizontal eye position and the refractive state of both eyes were measured simultaneously. The vergence responses of the infants and adults varied at the same frequency as the stimulus at the three tested modulation amplitudes. For a typical infant of this age, the smallest amplitude is equivalent to an interocular change of approximately 2° of retinal disparity, from nearest to farthest points. The infants' accommodation responses only modulated reliably to the largest stimulus, while adults responded to all three amplitudes. Although the accommodative system appears relatively insensitive, the sensitivity of the vergence responses suggests that subtle cues are available to drive vergence in the second month after birth. PMID:26891827
Physiological responses in rock climbing with repeated ascents over a 10-week period.
España-Romero, Vanesa; Jensen, Randall L; Sanchez, Xavier; Ostrowski, Megan L; Szekely, Jay E; Watts, Phillip B
2012-03-01
The purpose was to analyze the physiological responses and energy expenditure during repeated ascents of the same climbing route over a 10-week period. Nine climbers completed nine ascents of a specific route spaced 1 week apart. Expired air was analyzed continuously during each ascent, and time of ascent was recorded to the nearest second. Energy expenditure during climbing (EE(CLM)) and during climbing plus 10 min of recovery (EE(TOT)) was calculated by the Weir and Zuntz equations. Differences among ascents 1, 4, 6, and 9 were analyzed by repeated-measures ANOVA. Climbing time was longer for ascent 1 compared with ascents 4, 6, and 9 (P < 0.001). Differences in EE(CLM) (kcal) were found between ascent 1 and ascents 6 and 9, and between ascents 4 and 9 (P < 0.001), using both the Zuntz and Weir equations. Differences were also observed in EE for recovery (P < 0.05) and EE(TOT) (P < 0.05) using both equations. Repeated ascents of a climbing route decreased the climbing time and absolute energy expenditure during climbing. Initially, the decrease in climbing energy expenditure is accompanied by an increase in energy expenditure during recovery; however, by the ninth ascent, the total energy expenditure of the task is lower than for ascent 1. PMID:21674246
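The abbreviated Weir equation referenced above converts measured gas exchange into energy expenditure. A minimal sketch (the VO2/VCO2 values are illustrative, not the study's data, and the Zuntz variant is omitted):

```python
def weir_kcal_per_min(vo2_l_min, vco2_l_min):
    """Abbreviated Weir equation: energy expenditure (kcal/min)
    from oxygen uptake and carbon dioxide output, both in L/min."""
    return 3.941 * vo2_l_min + 1.106 * vco2_l_min

# Illustrative gas-exchange values for a short ascent (not from the study).
vo2, vco2 = 2.0, 1.8        # L/min
ascent_min = 3.0            # minutes of climbing
ee_per_min = weir_kcal_per_min(vo2, vco2)
print(round(ee_per_min, 2))               # 9.87 kcal/min
print(round(ee_per_min * ascent_min, 1))  # 29.6 kcal for the ascent
```

EE(TOT) in the study would add the expenditure measured over the 10-min recovery window to such a climbing total.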
Acute and medium-term effects of a 10-week running intervention on mood state in apprentices
Walter, Katrin; von Haaren, Birte; Löffler, Simone; Härtel, Sascha; Jansen, Carl-Philipp; Werner, Christian; Stumpp, Jürgen; Bös, Klaus; Hey, Stefan
2013-01-01
Exercise and physical activity have proven benefits for physical and psychological well-being. However, it is not clear if healthy young adults can enhance mood in everyday life through regular exercise. Earlier studies mainly showed positive effects of acute exercise and exercise programs on psychological well-being in children, older people, and clinical populations. Few studies controlled for participants' physical activity in daily life, performed besides the exercise program, which can impact results. In addition, the transition from mood enhancement induced by acute exercise to medium- or long-term effects of regular exercise is not yet determined. The purpose of this pilot study was to examine the acute effects of aerobic running training on mood and trends in medium-term changes of mood in the everyday life of young adults. We conducted a 10-week aerobic endurance training with frequent mood assessments and continuous activity monitoring. Twenty-three apprentices, separated into experimental and control groups, were monitored over 12 weeks. To control the effectiveness of the aerobic exercise program, participants completed a progressive treadmill test before and after the intervention period. The three basic mood dimensions (energetic arousal, valence, and calmness) were assessed via electronic diaries. Participants rated their mood state on 3 days a week at five measurement times over the 12 weeks. Participants' physical activity was assessed with accelerometers. All mood dimensions increased immediately after acute endurance exercise, but the results were not significant. The largest acute mood change was observed in valence (p = 0.07; η2 = 0.27). However, no medium-term effects on mood states could be observed after a few weeks of endurance training. Future studies should focus on the interaction between acute and medium-term effects of exercise training on mood. The decreasing compliance over the course of the study requires the development of
Anaerobic power in road cyclists is improved after 10 weeks of whole-body vibration training.
Oosthuyse, Tanja; Viedge, Alison; McVeigh, Joanne; Avidon, Ingrid
2013-02-01
Whole-body vibration (WBV) training has previously improved muscle power in various athletic groups requiring explosive muscle contractions. To evaluate the benefit of including WBV as a training adjunct for improving aerobic and anaerobic cycling performance, road cyclists (n = 9) performed 3 weekly, 10-minute sessions of intermittent WBV on synchronous vertical plates (30 Hz) while standing in a static posture. A control group of cyclists (n = 8) received no WBV training. Before and after the 10-week intervention period, lean body mass (LBM), cycling aerobic peak power (Wmax), 4 mM lactate concentration (OBLA), VO2peak, and Wingate anaerobic peak and mean power output were determined. The WBV group successfully completed all WBV sessions but reported a significant 30% decrease in weekly cycling training time (pre: 9.4 ± 3.3 h·wk(-1); post: 6.7 ± 3.7 h·wk(-1); p = 0.01) that resulted in a 6% decrease in VO2peak and a 4% decrease in OBLA. The control group reported a nonsignificant 6% decrease in cycling training volume (pre: 9.5 ± 3.6 h·wk(-1); post: 8.6 ± 2.9 h·wk(-1); p = 0.13), and all measured variables were maintained. Despite the evidence of detraining in the WBV group, Wmax was maintained (pre: 258 ± 53 W; post: 254 ± 57 W; p = 0.43). Furthermore, in the WBV group, Wingate peak power increased by 6% (668 ± 189 to 708 ± 220 W; p = 0.055) and Wingate mean power increased by 2% (553 ± 157 to 565 ± 157 W; p = 0.006) from preintervention to postintervention, without any change in LBM. WBV training is an attractive training supplement for improving anaerobic power without increasing muscle mass in road cyclists. PMID:22531614
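The percentage changes quoted in this abstract follow directly from the reported group means and can be checked in a few lines:

```python
def pct_change(pre, post):
    """Relative change from pre to post, in percent."""
    return (post - pre) / pre * 100

# Group means reported in the abstract above.
print(round(pct_change(9.4, 6.7)))   # -29, i.e. roughly the 30% training-time decrease
print(round(pct_change(668, 708)))   # 6, Wingate peak power (W)
print(round(pct_change(553, 565)))   # 2, Wingate mean power (W)
```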
Impact of 10 weeks of yoga practice on flexibility and balance of college athletes
Polsgrove, M Jay; Eggleston, Brandon M; Lockyer, Roch J
2016-01-01
Background: With clearer evidence of its benefits, coaches and athletes may better see that yoga has a role in optimizing performance. Aims: To determine the impact of yoga on male college athletes (N = 26). Methods: Over a 10-week period, a yoga group (YG) of athletes (n = 14) took part in biweekly yoga sessions, while a nonyoga group (NYG) of athletes (n = 12) took part in no additional yoga activity. Performance measures were obtained immediately before and after this period. Measurements of flexibility and balance included the sit-reach (SR), shoulder flexibility (SF), and stork stand (SS) tests; dynamic measurements consisted of joint angles (JA) measured during the performance of three distinct yoga positions (downward dog [DD], right foot lunge [RFL], and chair [C]). Results: Significant gains were observed in the YG for flexibility (SR, P = 0.01; SF, P = 0.03) and balance (SS, P = 0.05). No significant differences were observed in the NYG for flexibility and balance. Significantly greater JA were observed in the YG for RFL (dorsiflexion, l-ankle, P = 0.04), DD (extension, r-knee, P = 0.04; r-hip, P = 0.01; flexion, r-shoulder, P = 0.01), and C (flexion, r-knee, P = 0.01). Significant JA differences were observed in the NYG for DD (flexion, r-knee, P = 0.01; r-hip, P = 0.05; r-shoulder, P = 0.03) and C (flexion, r-knee, P = 0.01; extension, r-shoulder, P = 0.05). A between-group comparison revealed significant differences for RFL (l-ankle, P = 0.01), DD (r-knee, P = 0.01; r-hip, P = 0.01), and C (r-shoulder, P = 0.02). Conclusions: Results suggest that a regular yoga practice may increase flexibility and balance as well as whole-body measures of male college athletes and, therefore, may enhance athletic performances that require these characteristics. PMID:26865768
ERIC Educational Resources Information Center
Ashburn, Jennifer; Ayers, Mary Jane; Born-Ozment, Susan; Karsten, Jayne; Maeda, Sheri
This 10-week middle school curriculum unit for grades 6-8, integrating concepts, materials, and content from language arts, music, and visual arts, provides a set of specific instructional plans relative to the study of myths (often a content area in middle school grades across the country). All the sample lessons and examples in the curriculum…
Kosmidis, Christophoros; Pantos, George; Efthimiadis, Christopher; Gkoutziomitrou, Ioanna; Georgakoudi, Eleni; Anthimidis, George
2015-01-01
Patient: Female, 31 Final Diagnosis: Acute abdomen due to pedunculated uterine leiomyoma in torsion Symptoms: Abdominal pain • vomiting Medication: Cefoxitin 2 g Clinical Procedure: Laparoscopic excision of the pedunculated uterine leiomyoma – laparoscopic appendicectomy Specialty: Surgery Objective: Unusual clinical course Background: Pregnancy outcomes after laparoscopic myomectomy are generally favorable, with a pregnancy rate that is comparable to or even higher than the rate associated with abdominal myomectomy. The purpose of this article is to present the case of a pregnant patient at 10 weeks of gestation who underwent successful laparoscopic myomectomy of a twisted pedunculated uterine leiomyoma. Case Report: A 31-year-old pregnant Greek woman presenting with acute abdominal pain underwent diagnostic laparoscopy, which revealed a huge twisted uterine leiomyoma. Subsequently, laparoscopic myomectomy was successfully carried out. Conclusions: Laparoscopic myomectomy is a technically challenging procedure with surgeon-specific limitations. Laparoscopy during pregnancy should be performed with utmost care, and it proves to be a safe and effective procedure in the hands of clinicians with sufficient experience in laparoscopic surgery. PMID:26227425
Tolnai, Nóra; Szabó, Zsófia; Köteles, Ferenc; Szabo, Attila
2016-09-01
Pilates exercises have several demonstrated physical and psychological benefits. To date, most research in this context has been conducted with symptomatic or elderly people using few dependent measures. The current study examined the chronic, or longitudinal, effects of very low-frequency (once a week) Pilates training on several physical and psychological measures over a 10-week intervention in young, healthy, and sedentary women. Further, the study gauged the acute effects of Pilates exercises on positive and negative affect in 10 exercise sessions. Compared to a control group, the Pilates group exhibited significant improvements in skeletal muscle mass, flexibility, balance, core and abdominal muscle strength, body awareness, and negative affect. This group also showed favorable changes in positive affect (22.5% increase) and negative affect (12.2% decrease) in nine out of ten exercise sessions. This work clearly demonstrates the acute and chronic benefits of Pilates training on both physical and psychological measures. It also reveals that even once-weekly Pilates training is enough to trigger detectable benefits in young sedentary women. While this frequency is below the required levels of exercise for health, it may overcome the 'lack of time' excuse for not exercising, and, subsequently, its tangible benefits may positively influence one's engagement in more physical activity. PMID:27195456
de Hoyo, Moisés; Sañudo, Borja; Carrasco, Luis; Mateo-Cortes, Jesús; Domínguez-Cobo, Sergio; Fernandes, Orlando; Del Ojo, Juan J; Gonzalo-Skok, Oliver
2016-07-01
The aim of the current study was to analyse the effect of 10-week eccentric overload training on kinetic parameters during change of direction (COD) in U-19 football players. The outcomes measured included relative peak braking (rPB) and propulsive force (rPF), contact time (CT), time spent in the braking (BT) and propulsive (PT) phases, and relative total (rTOT_IMP), braking (rB_IMP), and propulsive (rP_IMP) impulses. Between-group results showed a substantially greater improvement (likely) in CT (ES: 0.72) and BT (ES: 0.74) during side-step cutting, and in rPB (ES: 0.84) and rB_IMP (ES: 0.72) during crossover cutting, in the experimental group (EXP) in comparison to the control group (CON). Within-group analysis showed substantially better performance (likely to almost certain) in CT (ES: 1.19), BT (ES: 1.24), PT (ES: 0.70), rPB (ES: 0.75), rPF (ES: 0.68), rTOT_IMP (ES: 0.48), and rB_IMP (ES: 0.50) in EXP during side-step cutting. Regarding crossover cutting, within-group analysis showed substantially better performance (likely to almost certain) in CT (ES: 0.75), rPB (ES: 0.75), rPF (ES: 1.34), rTOT_IMP (ES: 0.61), rB_IMP (ES: 0.76), and rP_IMP (ES: 0.46) in EXP. In conclusion, the eccentric overload-based programme led to an improvement in kinetic parameters during COD football tasks. PMID:26963941
Effect of a 10-week yoga programme on the quality of life of women after breast cancer surgery
Merecz, Dorota; Wójcik, Aleksandra; Świątkowska, Beata; Sierocka, Kamilla; Najder, Anna
2014-01-01
Aim of the study: The following research aimed to determine the effect of yoga on the quality of life of women after breast cancer surgery. Material and methods: A 10-week yoga programme included 90-minute yoga lessons once a week. To estimate quality of life, questionnaires developed by the European Organisation for Research and Treatment of Cancer (QLQ-C30 and QLQ-BR23) were used. The experimental group consisted of 12 women who practised yoga; the control group, of 16 women who did not. There were no between-group differences in age, time since operation, or characteristics associated with disease, treatment, and participation in rehabilitation. Results: Our results revealed an improvement in general health and quality of life, physical and social functioning, as well as a reduction of difficulties in daily activities among the exercising women. Their future prospects also improved: they worried less about their health than they had before participating in the programme. Compared to baseline, fatigue, dyspnoea, and discomfort (pain, swelling, sensitivity) in the arm and breast on the operated side decreased among the exercising women. Conclusions: Participation in the exercise programme resulted in an improvement of physical functioning and a reduction of fatigue, dyspnoea, and discomfort in the area of the breast and arm on the operated side. Based on our results and those obtained in foreign studies, we conclude that rehabilitation with the use of yoga practice improves the quality of life of patients after breast cancer surgery. However, we recommend further research on this issue in Poland. PMID:26327853
Paulsen, G; Hamarsland, H; Cumming, K T; Johansen, R E; Hulmi, J J; Børsheim, E; Wiig, H; Garthe, I; Raastad, T
2014-01-01
This study investigated the effects of vitamin C and E supplementation on acute responses and adaptations to strength training. Thirty-two recreationally strength-trained men and women were randomly allocated to receive a vitamin C and E supplement (1000 mg day−1 and 235 mg day−1, respectively), or a placebo, for 10 weeks. During this period the participants’ training involved heavy-load resistance exercise four times per week. Muscle biopsies from m. vastus lateralis were collected, and 1 repetition maximum (1RM) and maximal isometric voluntary contraction force, body composition (dual-energy X-ray absorptiometry), and muscle cross-sectional area (magnetic resonance imaging) were measured before and after the intervention. Furthermore, the cellular responses to a single exercise session were assessed midway in the training period by measurements of muscle protein fractional synthetic rate and phosphorylation of several hypertrophic signalling proteins. Muscle biopsies were obtained from m. vastus lateralis twice before, and 100 and 150 min after, the exercise session (4 × 8RM, leg press and knee-extension). The supplementation did not affect the increase in muscle mass or the acute change in protein synthesis, but it hampered certain strength increases (biceps curl). Moreover, increased phosphorylation of p38 mitogen-activated protein kinase, Extracellular signal-regulated protein kinases 1 and 2 and p70S6 kinase after the exercise session was blunted by vitamin C and E supplementation. The total ubiquitination levels after the exercise session, however, were lower with vitamin C and E than placebo. We concluded that vitamin C and E supplementation interfered with the acute cellular response to heavy-load resistance exercise and demonstrated tentative long-term negative effects on adaptation to strength training. PMID:25384788
ERIC Educational Resources Information Center
Smith, Sue
2002-01-01
A 10-week stress management and relaxation course helped anxious students develop skills and strategies derived from self-awareness. Course included stress theory, organizational skills (time management, goal setting), personal transformation, tolerance for uncertainty, and metacognition, with an emphasis on self-efficacy and autonomy. (Contains…
ERIC Educational Resources Information Center
Adams, Gilbert L.
2013-01-01
This ex post facto comparison study of a postsecondary apprenticeship program at a naval ship construction company examined 8 years of academic performance and program completion data for two curricular formats: a 15-week traditional group (1,259 apprentices) and a 10-week accelerated group (736 apprentices). The two groups were investigated to…
Sofianidis, Giorgos; Hatzitaki, Vassilia; Douka, Stella; Grouios, Giorgos
2009-04-01
This preliminary study examined the effect of a 10-wk traditional Greek dance program on static and dynamic balance indices in healthy elderly adults. Twenty-six community-dwelling older adults were randomly assigned to either an intervention group who took supervised Greek traditional dance classes for 10 wk (1 hr, 2 sessions/week, n = 14), or a control group (n = 12). Balance was assessed pre- and postintervention by recording the center-of-pressure (COP) variations and trunk kinematics during performance of the Sharpened-Romberg test, 1-leg (OL) stance, and dynamic weight shifting (WS). After practice, the dance group significantly decreased COP displacement and trunk sway in OL stance. A significant increase in the range of trunk rotation was noted during performance of dynamic WS in the sagittal and frontal planes. These findings support the use of traditional dance as an effective means of physical activity for improving static and dynamic balance control in the elderly. PMID:19451666
Douglas, S L; Wellock, I; Edwards, S A; Kyriazakis, I
2014-10-01
Piglets born with low birth weights (LBiW) are likely to be lighter at weaning. Starter regimes tailored for pigs of average BW therefore may not be optimal for LBiW nursery performance. The objective was to determine if LBiW pigs benefit from a high specification starter regime and the provision of extra feed (additional allowance of last phase diet of the starter regime) in comparison to a standard commercial regime. Additionally, the effect of starter regime on performance of normal birth weight (NBiW) pigs at weaning was determined and compared to that of LBiW pigs. Finally, the cost effectiveness of the treatments was determined. The experiment was therefore an incomplete 2 × 2 × 2 factorial design, as the provision of extra feed was given only to LBiW pigs (n = 6 replicates per treatment; 5 pigs per replicate). Treatments comprised birth weight (LBiW or NBiW), starter regime (high specification [HS] or standard starter [SS]), and extra feed 3 quantity (yes [YF] or no [NF], for LBiW pigs only; feed 3 corresponded to the last phase diet of the starter regime). At weaning (d 28), pigs were randomly assigned within each birth weight category to treatment groups. Nutritional treatments were fed ad libitum on a kilogram/head basis for approximately 3 wk followed by a common weaner diet fed ad libitum until d 70. Starter regime (P = 0.019), feed 3 amount (P = 0.010), and their interaction (P = 0.029) had an effect on ADG of LBiW pigs from d 28 to 49, with pigs on HS followed by YF (HY) performing best. An improvement in feed conversion ratio (FCR) was noted between d 28 and 49 for pigs fed the additional feed 3 (P = 0.030); between d 49 and 70, the only residual effect seen was of starter regime (P = 0.017) on ADG. In contrast, there was no significant effect of starter regime from d 28 to 70 on ADG, ADFI, or FCR of NBiW pigs. By d 49 and 70, LBiW pigs on regime HY weighed the same as NBiW pigs (d 70 BW; 30.0 vs. 30.6 kg; P = 0.413), with similar growth rates from
2010-01-01
Background We previously determined that a weight-maintenance, non-ketogenic diet containing 30% carbohydrate (CHO), 30% protein, 40% fat, (30:30:40) (LoBAG30) decreased glycated hemoglobin (%tGHb) from 10.8 to 9.1% over a 5 week period in subjects with untreated type 2 diabetes. Both the fasting glucose and postprandial glucose area were decreased. Our objective in the present 10-week study was to determine: 1) whether the above results could be maintained, or even improved (suggesting a metabolic adaptation) and 2) whether the subjects would accept the diet for this longer time period. In addition, protein balance, and a number of other blood and urine constituents were quantified at 5 and at 10 weeks on the LoBAG30 diet to address metabolic adaptation. Methods Eight men with untreated type 2 diabetes were studied over a 10-week period. Blood was drawn and urine was collected over a 24 hour period at the beginning of the study with subjects ingesting a standard diet of 55% CHO, 15% protein, 30% fat, and at the end of 5 and 10 weeks following ingestion of a LoBAG30 diet. Results Body weight was stable. Fasting glucose decreased by 19% at week 5 and 28% at week 10; 24-h total glucose area decreased by 27% at week 5 and 35% at week 10 compared to baseline. Insulin did not change. Mean %tGHb decreased by 13% at week 5, 25% at week 10, and was still decreasing linearly, indicating that a metabolic adaptation occurred. Serum NEFA, AAN, uric acid, urea, albumin, prealbumin, TSH, Total T3, free T4, B12, folate, homocysteine, creatinine, growth hormone and renin did not differ between weeks 5 and 10. IGF-1 increased modestly. Urinary glucose decreased; urinary pH and calcium were similar. Conclusions A LoBAG30 diet resulted in continued improvement in glycemic control. This improvement occurred without significant weight loss, with unchanged insulin and glucagon profiles, and without deterioration in serum lipids, blood pressure or kidney function. Extending the duration
Kendrick, Iain P; Harris, Roger C; Kim, Hyo Jeong; Kim, Chang Keun; Dang, Viet H; Lam, Thanh Q; Bui, Toai T; Smith, Marcus; Wise, John A
2008-05-01
Carnosine (Carn) occurs in high concentrations in skeletal muscle and is a potent physico-chemical buffer of H+ over the physiological range. Recent research has demonstrated that 6.4 g x day(-1) of beta-alanine (beta-ala) can significantly increase skeletal muscle Carn concentrations (M-[Carn]) whilst the resultant change in buffering capacity has been shown to be paralleled by significant improvements in anaerobic and aerobic measures of exercise performance. Muscle carnosine increase has also been linked to increased work done during resistance training. Prior research has suggested that strength training may also increase M-[Carn], although this is disputed by other studies. The aim of this investigation was to assess the effect of 10 weeks of resistance training on M-[Carn] and, secondly, to investigate whether increased M-[Carn] brought about through beta-ala supplementation had a positive effect on training responses. Twenty-six Vietnamese sports science students completed the study. The subjects completed a 10-week resistance-training program whilst consuming 6.4 g x day(-1) of beta-ala (beta-ALG) or a matched dose of a placebo (PLG). Subjects were assessed prior to and after training for whole body strength, isokinetic force production, muscular endurance, and body composition. beta-Alanine-supplemented subjects increased M-[Carn] by 12.81 +/- 7.97 mmol x kg(-1) dry muscle whilst there was no change in PLG subjects. There was no significant effect of beta-ala supplementation on any of the exercise parameters measured, body mass or % body fat. In conclusion, 10 weeks of resistance training alone did not change M-[Carn]. PMID:18175046
Burke, David T; Tran, David; Cui, Di; Burke, Daniel P; Al-Adawi, Samir; Dorvlo, Atsu Ss
2013-01-01
In an age of increasing numbers of lifestyle diseases and plasticity of longevity, exercise and weight training have been increasingly recognized as both preventing and mitigating the severity of many illnesses. This study was designed to determine whether significant weight-lifting gains could be realized through the Anatoly Gravitational System. Specifically, this study sought to determine whether this once-weekly weight-training system could produce significant weekly strength gains during a 10-week training period. A total of 50 participants, ranging in age from 17 to 67 years, completed at least 10 weekly 30-minute training sessions. The results suggest participants could, on average, double their weight-lifting capacity within 10 sessions. This preliminary study suggests the Anatoly Gravitational System provides a unique opportunity to load the musculoskeletal system with extremely high loads, with rapid weekly gains in the weight lifted, using only short weekly training sessions. More studies are warranted to scrutinize these findings. PMID:24379727
Strzelecki, Dominik; Tabaszewska, Agnieszka; Barszcz, Zbigniew; Józefowicz, Olga; Kropiwnicki, Paweł; Rabe-Jabłońska, Jolanta
2013-12-01
Memantine and other glutamatergic agents are currently being investigated for some off-label indications owing to glutamatergic involvement in several psychoneurological disorders. We assumed that memantine, similarly to ketamine, may positively influence mood, while also having the potential to improve cognition and general quality of life. We report a case of a 49-year-old male hospitalized during a manic and a subsequent moderate depressive episode. After ineffective use of lithium, olanzapine and antidepressive treatment with mianserin, memantine was added at up to 20 mg per day for 10 weeks. The mental state was assessed using the Hamilton Depression Rating Scale, the Young Mania Rating Scale, the Hamilton Anxiety Scale, the Clinical Global Impression scale, the World Health Organization Quality of Life Scale and psychological tests. After 10 weeks the patient achieved a partial symptomatic improvement in mood, anxiety and quality of sleep, but his activity remained insufficient. We also observed an improvement in the parameters of cognitive functioning and quality of life. There were neither significant mood variations during the memantine use nor mood changes after its termination. No significant side effects were noted during the memantine treatment. We conclude that using memantine in bipolar depression may improve mood, cognitive functioning and quality of life. PMID:24474993
Liakopoulou, A.; Buttar, H.S.; Nera, E.A.; Fernando, L.
1989-01-01
Offspring of mice treated with cyclophosphamide (Cy; 1, 2.5 or 5 mg/kg) during pregnancy (6-18 days of gestation) and tested for immunocompetence from 5 to 10 weeks of age were found to have defective reticuloendothelial clearance. The main effects were: (a) increased elimination half-time (T1/2) of ⁵¹Cr-labeled SRBC from circulation, (b) decreased liver uptake of ⁵¹Cr and (c) impaired ability of the spleen, mostly affecting the female pups, to compensate for decreased liver uptake. The highest dose group suffered the most pronounced effects. This group was also found to have increased IgG immunoglobulin levels at 7 weeks of age. IgG antibody production in response to specific antigenic stimulation and delayed hypersensitivity reactions to oxazolone did not appear to be affected by Cy treatment.
Rave, Klaus; Roggen, Kerstin; Dellweg, Sibylle; Heise, Tim; tom Dieck, Heike
2007-11-01
Subjects with obesity and elevated fasting blood glucose are at high risk of developing type 2 diabetes, which may be reduced by a dietary intervention leading to an improvement of insulin resistance. We investigated the potential of a whole-grain based dietary product (WG) with reduced starch content derived from double-fermented wheat during a hypo-energetic diet to positively influence body weight, fasting blood glucose, insulin resistance and lipids in comparison to a nutrient-dense meal replacement product (MR) in a randomized two-way cross-over study with two 4-week treatment periods separated by a 2-week wash-out. Subjects replaced at least two daily meals with WG and MR, respectively, targeting a consumption of 200 g of either product per day. Total daily energy intake was limited to 7120 kJ. Thirty-one subjects (BMI 33.9 (SD 2.7) kg/m2, fasting blood glucose 6.3 (SD 0.8) mmol/l) completed the study. In both treatment groups body weight (-2.5 (SD 2.0) v. -3.2 (SD 1.6) kg for WG v. MR), fasting blood glucose (-0.4 (SD 0.3) v. -0.5 (SD 0.5) mmol/l), total cholesterol (-0.5 (SD 0.5) v. -0.6 (SD 0.5) mmol/l), TAG (-0.3 (SD 0.9) v. -0.3 (SD 1.2) mmol/l) and homeostasis model assessment (HOMA) insulin resistance score (-0.7 (SD 0.8) v. -1.1 (SD 1.7) microU/ml x mmol/l) improved (P < 0.05) with no significant differences between the treatments. After statistical adjustment for the amount of body weight lost, however, the comparison between both groups revealed that fasting serum insulin (P = 0.031) and HOMA insulin resistance score (P = 0.049) improved more with WG than with MR. We conclude that WG favourably influences metabolic risk factors for type 2 diabetes independent of the amount of body weight lost during a hypo-energetic diet. PMID:17562226
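The HOMA insulin resistance score reported above is conventionally computed from fasting insulin and fasting glucose. A minimal sketch, assuming the widely used Matthews approximation (the abstract does not state which HOMA variant was applied):

```python
def homa_ir(fasting_insulin_uU_ml, fasting_glucose_mmol_l):
    """HOMA insulin resistance score (Matthews approximation, assumed here):
    fasting insulin (microU/mL) x fasting glucose (mmol/L) / 22.5."""
    return fasting_insulin_uU_ml * fasting_glucose_mmol_l / 22.5

# Illustrative values (not from the study): insulin 12 microU/mL, glucose 6.3 mmol/L
print(round(homa_ir(12.0, 6.3), 2))  # 3.36
```

Units matter: the conventional constant 22.5 assumes insulin in microU/mL and glucose in mmol/L, matching the units used in the abstract.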
NASA Astrophysics Data System (ADS)
Myrbo, A.; Howes, T.; Thompson, R.; Drake, C.; Woods, P.; Schuldt, N.; Borkholder, B.; Marty, J.; Lafrancois, T.; Pellerin, H.
2012-12-01
The two-week "mini-REU" was designed to attract students with little or no independent research experience, who might be intimidated by applying for a ten-week internship away from home (but who might apply for one after completing a good mini-REU). The arc of research, from site selection to field work and lab work to data interpretation and poster presentation, must be encompassed in these brief projects, so group projects with clear goals are best suited for mini-REUs. The May 2012 project, with twelve students in four research proxy groups (charcoal, phytoliths, plant macrofossils, and zooplankton), demonstrated that an FDL lake, Rice Portage, had extensive wild rice habitat prior to early 20th-century Euroamerican ditching; this proof was required in order for FDL to gain a permit from the Army Corps of Engineers to raise the lake level as part of a wild rice restoration effort. Each proxy group had one research advisor (a graduate student or soft money researcher), plus one UMN über-advisor for the project as a whole, as well as the Fond du Lac resource manager. All of these advisors also work with the 10-week interns throughout the summer.
Erceg, David N.; Anderson, Lindsey J.; Nickles, Chun M.; Lane, Christianne J.; Weigensberg, Marc J.; Schroeder, E. Todd
2015-01-01
Purpose: With the childhood obesity epidemic, efficient methods of exercise are sought to improve health. We tested whether whole body vibration (WBV) exercise can positively affect bone metabolism and improve insulin/glucose dynamics in sedentary overweight Latino boys. Methods: Twenty Latino boys 8-10 years of age were randomly assigned to either a control (CON) group or a 3 days/wk WBV exercise (VIB) group for 10 wk. Results: Significant increases in BMC (4.5±3.2%; p=0.01) and BMD (1.3±1.3%; p<0.01) were observed for the VIB group when compared to baseline values. For the CON group BMC significantly increased (2.0±2.2%; p=0.02), with no change in BMD (0.8±1.3%; p=0.11). There were no significant between-group changes in BMC or BMD. No significant change was observed for osteocalcin and collagen type I C-telopeptide (CTx) for the VIB group. However, osteocalcin showed a decreasing trend (p=0.09) and CTx significantly increased (p<0.03) for the CON group. This increase in CTx was significantly different between groups (p<0.02) and the effect size of the between-group difference in change was large (-1.09). There were no significant correlations between osteocalcin and measures of fat mass or insulin resistance for collapsed data. Conclusion: Although bone metabolism was altered by WBV training, no associations were apparent between osteocalcin and insulin resistance. These findings suggest WBV exercise may positively increase BMC and BMD by decreasing bone resorption in overweight Latino boys. PMID:26078710
NASA Astrophysics Data System (ADS)
Cooper, Colin; Frieze, Alan
The aim of this article is to discuss some of the notions and applications of random walks on finite graphs, especially as they apply to random graphs. In this section we give some basic definitions, in Section 2 we review applications of random walks in computer science, and in Section 3 we focus on walks in random graphs.
ERIC Educational Resources Information Center
Molina, Brooke S. G.; Flory, Kate; Bukstein, Oscar G.; Greiner, Andrew R.; Baker, Jennifer L.; Krug, Vicky; Evans, Steven W.
2008-01-01
Objective: This pilot study tests the feasibility and preliminary efficacy of an after-school treatment program for middle schoolers with ADHD using a randomized clinical trial design. Method: A total of 23 students with ADHD (25% female, 48% African American) from a large public middle school were randomly assigned to a 10-week program or to…
NASA Astrophysics Data System (ADS)
ajansen; kwhitefoot; panteltje1; edprochak; sudhakar, the
2014-07-01
In reply to the physicsworld.com news story “How to make a quantum random-number generator from a mobile phone” (16 May, http://ow.ly/xFiYc, see also p5), which describes a way of delivering random numbers by counting the number of photons that impinge on each of the individual pixels in the camera of a Nokia N9 smartphone.
NASA Technical Reports Server (NTRS)
Messaro, Semma; Harrison, Phillip
2010-01-01
Ares I Zonal Random vibration environments due to acoustic impingement and combustion processes are developed for liftoff, ascent and reentry. Random Vibration test criteria for Ares I Upper Stage pyrotechnic components are developed by enveloping the applicable zonal environments where each component is located. Random vibration tests will be conducted to assure that these components will survive and function appropriately after exposure to the expected vibration environments. Methodology: Random Vibration test criteria for Ares I Upper Stage pyrotechnic components were desired that would envelope all the applicable environments where each component was located. Applicable Ares I Vehicle drawings and design information needed to be assessed to determine the location(s) for each component on the Ares I Upper Stage. Design and test criteria needed to be developed by plotting and enveloping the applicable environments using Microsoft Excel Spreadsheet Software and documenting them in a report using Microsoft Word Processing Software. Conclusion: Random vibration liftoff, ascent, and green run design & test criteria for the Upper Stage Pyrotechnic Components were developed by using Microsoft Excel to envelope zonal environments applicable to each component. Results were transferred from Excel into a report using Microsoft Word. After the report is reviewed and edited by my mentor it will be submitted for publication as an attachment to a memorandum. Pyrotechnic component designers will extract criteria from my report for incorporation into the design and test specifications for components. Eventually the hardware will be tested to the environments I developed to assure that the components will survive and function appropriately after exposure to the expected vibration environments.
10 Days or 10 Weeks: Immersion Programs That Work.
ERIC Educational Resources Information Center
Troiani, Elisa A.
The foreign language faculty at the College of Saint Scholastica (Minnesota) developed and implemented 10-day Spanish and French immersion programs based on Peace Corps methodology as a means of affording students time for intensive study of those languages, improving students' fluency, and instituting a change in teaching methodology. The first…
2016-01-01
Background The need for accessible and motivating treatment approaches within mental health has led to the development of an Internet-based serious game intervention (called “Plan-It Commander”) as an adjunct to treatment as usual for children with attention-deficit/hyperactivity disorder (ADHD). Objective The aim was to determine the effects of Plan-It Commander on daily life skills of children with ADHD in a multisite randomized controlled crossover open-label trial. Methods Participants (N=170) in this 20-week trial had a diagnosis of ADHD and ranged in age from 8 to 12 years (male: 80.6%, 137/170; female: 19.4%, 33/170). They were randomized to a serious game intervention group (group 1; n=88) or a treatment-as-usual crossover group (group 2; n=82). Participants randomized to group 1 received a serious game intervention in addition to treatment as usual for the first 10 weeks and then received treatment as usual for the next 10 weeks. Participants randomized to group 2 received treatment as usual for the first 10 weeks and crossed over to the serious game intervention in addition to treatment as usual for the subsequent 10 weeks. Primary (parent report) and secondary (parent, teacher, and child self-report) outcome measures were administered at baseline, 10 weeks, and 10-week follow-up. Results After 10 weeks, participants in group 1 compared to group 2 achieved significantly greater improvements on the primary outcome of time management skills (parent-reported; P=.004) and on secondary outcomes of the social skill of responsibility (parent-reported; P=.04), and working memory (parent-reported; P=.02). Parents and teachers reported that total social skills improved over time within groups, whereas effects on total social skills and teacher-reported planning/organizing skills were nonsignificant between groups. Within group 1, positive effects were maintained or further improved in the last 10 weeks of the study. Participants in group 2, who played the
Wang, Sijian; Nan, Bin; Rosset, Saharon; Zhu, Ji
2011-03-01
We propose a computationally intensive method, the random lasso method, for variable selection in linear models. The method consists of two major steps. In step 1, the lasso method is applied to many bootstrap samples, each using a set of randomly selected covariates. A measure of importance is yielded from this step for each covariate. In step 2, a similar procedure to the first step is implemented with the exception that for each bootstrap sample, a subset of covariates is randomly selected with unequal selection probabilities determined by the covariates' importance. Adaptive lasso may be used in the second step with weights determined by the importance measures. The final set of covariates and their coefficients are determined by averaging bootstrap results obtained from step 2. The proposed method alleviates some of the limitations of lasso, elastic-net and related methods noted especially in the context of microarray data analysis: it tends to remove highly correlated variables altogether or select them all, and maintains maximal flexibility in estimating their coefficients, particularly with different signs; the number of selected variables is no longer limited by the sample size; and the resulting prediction accuracy is competitive or superior compared to the alternatives. We illustrate the proposed method by extensive simulation studies. The proposed method is also applied to a Glioblastoma microarray data analysis. PMID:22997542
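The two-step procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the lasso solver is a bare coordinate-descent stand-in, and the bootstrap count B, the numbers of sampled covariates q1 and q2, and the penalty alpha are illustrative choices.

```python
import numpy as np

def lasso_cd(X, y, alpha, iters=200):
    """Minimal coordinate-descent lasso for (1/2n)||y - Xb||^2 + alpha*||b||_1.
    Stands in for any off-the-shelf lasso solver."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        for j in range(p):
            resid = y - X @ beta + X[:, j] * beta[j]        # partial residual
            rho = X[:, j] @ resid / n
            z = X[:, j] @ X[:, j] / n
            beta[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z
    return beta

def random_lasso(X, y, B=100, q1=None, q2=None, alpha=0.1, seed=0):
    """Two-step random lasso sketch; B, q1, q2, alpha are illustrative choices."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    q1 = q1 or max(1, p // 2)
    q2 = q2 or max(1, p // 2)

    def bootstrap(q, probs=None):
        coefs = np.zeros((B, p))
        for b in range(B):
            rows = rng.integers(0, n, size=n)               # bootstrap sample
            cols = rng.choice(p, size=q, replace=False, p=probs)
            coefs[b, cols] = lasso_cd(X[np.ix_(rows, cols)], y[rows], alpha)
        return coefs

    # Step 1: importance of each covariate = mean |coefficient| over bootstrap fits
    importance = np.abs(bootstrap(q1)).mean(axis=0) + 1e-8  # small floor keeps all selectable
    # Step 2: reselect covariates with probability proportional to importance,
    # then average the bootstrap coefficients for the final estimate
    return bootstrap(q2, importance / importance.sum()).mean(axis=0)
```

Because each bootstrap fit sees only a random subset of covariates, two highly correlated covariates are rarely forced to compete in the same model, which is how the method keeps (or drops) them together.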
Is random access memory random?
NASA Technical Reports Server (NTRS)
Denning, P. J.
1986-01-01
Most software is constructed on the assumption that the programs and data are stored in random access memory (RAM). Physical limitations on the relative speeds of processor and memory elements lead to a variety of memory organizations that match processor addressing rate with memory service rate. These include interleaved and cached memory. A very high fraction of a processor's address requests can be satisfied from the cache without reference to the main memory. The cache requests information from main memory in blocks that can be transferred at the full memory speed. Programmers who organize algorithms for locality can realize the highest performance from these computers.
Random broadcast on random geometric graphs
Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias
2009-01-01
In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs, when with high probability: (i) the RGG is connected, (ii) there exists a giant component in the RGG. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph, or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
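The push model described above is easy to simulate. A small sketch, with the connectivity radius set somewhat above the usual ~sqrt(ln n / (pi n)) threshold (an assumption for illustration; the paper's regimes are asymptotic):

```python
import math
import random

def random_geometric_graph(n, r, seed=1):
    """n points uniform in the unit square; edge between points closer than r."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) < r:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def push_broadcast_rounds(adj, source=0, seed=2, max_rounds=10_000):
    """Rounds of the push algorithm until everyone is informed (or we give up)."""
    rng = random.Random(seed)
    informed = {source}
    rounds = 0
    while len(informed) < len(adj) and rounds < max_rounds:
        rounds += 1
        # only nodes informed at the start of the round push this round
        for v in list(informed):
            if adj[v]:
                informed.add(rng.choice(adj[v]))
    return rounds, len(informed)

n = 300
r = 2 * math.sqrt(math.log(n) / (math.pi * n))  # above the connectivity threshold
adj = random_geometric_graph(n, r)
rounds, reached = push_broadcast_rounds(adj)
print(rounds, reached)
```

If the chosen radius leaves the graph disconnected, the source's component is still fully informed; `reached` then reports its size rather than n.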
Quantumness, Randomness and Computability
NASA Astrophysics Data System (ADS)
Solis, Aldo; Hirsch, Jorge G.
2015-06-01
Randomness plays a central role in the quantum mechanical description of our interactions. We review the relationship between the violation of Bell inequalities, non-signaling and randomness. We discuss the challenge in defining a random string, and show that algorithmic information theory provides a necessary condition for randomness using Borel normality. We close with a view on incomputability and its implications in physics.
How random is a random vector?
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2015-12-01
Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation" (the square root of the generalized variance) is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index" (a derivative of the Wilks standard deviation) is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams": tangible planar visualizations that answer the question: How random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vector empirical data.
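The Wilks standard deviation is straightforward to compute from data: take the square root of the determinant of the sample covariance matrix. A short sketch (the uncorrelation index is not reproduced here, since its exact definition is given in the paper):

```python
import numpy as np

def wilks_std(data):
    """Wilks standard deviation: square root of the generalized variance,
    i.e. of the determinant of the sample covariance matrix."""
    cov = np.cov(data, rowvar=False)
    return float(np.sqrt(np.linalg.det(cov)))

rng = np.random.default_rng(0)
x = rng.standard_normal((100_000, 2))                           # two independent unit-variance components
z = np.column_stack([x[:, 0], 0.8 * x[:, 0] + 0.6 * x[:, 1]])   # correlated pair, still unit variances

print(wilks_std(x))  # close to 1.0 (population value: sqrt(det(I)) = 1)
print(wilks_std(z))  # close to 0.6 (population det = 1 - 0.8^2 = 0.36)
```

Correlation between components shrinks the Wilks standard deviation toward zero even when each marginal variance is unchanged, which matches its reading as a measure of overall randomness.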
Emery, C F; Schein, R L; Hauck, E R; MacIntyre, N R
1998-05-01
Exercise rehabilitation is recommended increasingly for patients with chronic obstructive pulmonary disease (COPD). This study examined the effect of exercise and education on 79 older adults (M age = 66.6 +/- 6.5 years; 53% female) with COPD, randomly assigned to 10 weeks of (a) exercise, education, and stress management (EXESM; n = 29); (b) education and stress management (ESM; n = 25); or (c) waiting list (WL; n = 25). EXESM included 37 sessions of exercise, 16 educational lectures, and 10 weekly stress management classes. ESM included only the 16 lectures and 10 stress management classes. Before and after the intervention, assessments were conducted of physiological functioning (pulmonary function, exercise endurance), psychological well-being (depression, anxiety, quality of life), and cognitive functioning (attention, motor speed, mental efficiency, verbal processing). Repeated measures multivariate analysis of variance indicated that EXESM participants experienced changes not observed among ESM and WL participants, including improved endurance, reduced anxiety, and improved cognitive performance (verbal fluency). PMID:9619472
Comparing MTI randomization procedures to blocked randomization.
Berger, Vance W; Bejleri, Klejda; Agnor, Rebecca
2016-02-28
Randomization is one of the cornerstones of the randomized clinical trial, and there is no shortage of methods one can use to randomize patients to treatment groups. When deciding which one to use, researchers must bear in mind that not all randomization procedures are equally adept at achieving the objective of randomization, namely, balanced treatment groups. One threat is chronological bias, and permuted blocks randomization does such a good job at controlling chronological bias that it has become the standard randomization procedure in clinical trials. But permuted blocks randomization is especially vulnerable to selection bias, so as a result, the maximum tolerated imbalance (MTI) procedures were proposed as better alternatives. In comparing the procedures, we have somewhat of a false controversy, in that actual practice goes uniformly one way (permuted blocks), whereas scientific arguments go uniformly the other way (MTI procedures). There is no argument in the literature to suggest that the permuted block design is better than or even as good as the MTI procedures, but this dearth is matched by an equivalent one regarding actual trials using the MTI procedures. So the 'controversy', if we are to call it that, pits misguided precedent against sound advice that tends to be ignored in practice. We shall review the issues to determine scientifically which of the procedures is better and, therefore, should be used. PMID:26337607
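One concrete MTI procedure is the "big stick" design: assignments are fair coin flips unless the group imbalance reaches the maximum tolerated imbalance, at which point the next assignment deterministically restores balance. A sketch under that reading (the MTI value of 3 is an illustrative choice):

```python
import random

def big_stick_sequence(n, mti=3, seed=0):
    """Big-stick MTI allocation: coin flips within the tolerated imbalance,
    forced assignments at the boundary. mti=3 is illustrative."""
    rng = random.Random(seed)
    seq = []
    imbalance = 0  # count(A) - count(B)
    for _ in range(n):
        if imbalance >= mti:
            arm = "B"              # at the boundary: force toward balance
        elif imbalance <= -mti:
            arm = "A"
        else:
            arm = rng.choice("AB")  # inside the boundary: fair coin
        seq.append(arm)
        imbalance += 1 if arm == "A" else -1
    return seq

print("".join(big_stick_sequence(40)))
```

Unlike a fixed permuted block, an observer who has seen the sequence so far can predict the next assignment only at the imbalance boundary, which is the selection-bias argument made above.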
ERIC Educational Resources Information Center
De Boeck, Paul
2008-01-01
It is common practice in IRT to consider items as fixed and persons as random. Both continuous and categorical person parameters are most often random variables, whereas for items only continuous parameters are used and they are commonly of the fixed type, although exceptions occur. It is shown in the present article that random item parameters…
NASA Technical Reports Server (NTRS)
Erdmann, Michael
1992-01-01
This paper investigates the role of randomization in the solution of robot manipulation tasks. One example of randomization is shown by the strategy of shaking a bin holding a part in order to orient the part in a desired stable state with some high probability. Randomization can be useful for mobile robot navigation and as a means of guiding the design process.
Quantum random number generation
Ma, Xiongfeng; Yuan, Xiao; Cao, Zhu; Zhang, Zhen; Qi, Bing
2016-01-01
Here, quantum physics can be exploited to generate true random numbers, which play important roles in many applications, especially in cryptography. Genuine randomness from the measurement of a quantum system reveals the inherent nature of quantumness — coherence, an important feature that differentiates quantum mechanics from classical physics. The generation of genuine randomness is generally considered impossible with only classical means. Based on the degree of trustworthiness on devices, quantum random number generators (QRNGs) can be grouped into three categories. The first category, practical QRNG, is built on fully trusted and calibrated devices and typically can generate randomness at a high speed by properly modeling the devices. The second category is self-testing QRNG, where verifiable randomness can be generated without trusting the actual implementation. The third category, semi-self-testing QRNG, is an intermediate category which provides a tradeoff between the trustworthiness on the device and the random number generation speed.
Blocked randomization with randomly selected block sizes.
Efird, Jimmy
2011-01-01
When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes. PMID:21318011
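The paper's recommendation, permuted blocks whose sizes are themselves drawn at random so the allocation sequence is harder to predict, can be sketched as follows; the candidate block sizes are illustrative:

```python
import random

def blocked_sequence(n, block_sizes=(4, 6, 8), seed=0):
    """Permuted-block 1:1 allocation with randomly selected block sizes.
    Candidate sizes are illustrative; they must be even for 1:1 allocation."""
    rng = random.Random(seed)
    seq = []
    while len(seq) < n:
        size = rng.choice(block_sizes)       # block size drawn at random
        block = ["A", "B"] * (size // 2)     # balanced block...
        rng.shuffle(block)                   # ...in random order
        seq.extend(block)
    return seq[:n]

print("".join(blocked_sequence(24)))
```

With a fixed block size an unblinded investigator can often deduce the final assignments of a block; varying the size removes the known block boundaries, which is the selection-bias protection argued for above.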
2012-01-01
Background Clinical experience suggests that many patients with Modic changes have relatively severe and persistent low back pain (LBP), which typically appears to be resistant to treatment. Exercise therapy is the recommended treatment for chronic LBP; however, due to their underlying pathology, Modic changes might be a diagnostic subgroup that does not benefit from exercise. The objective of this study was to compare the current state-of-the-art treatment approach (exercise and staying active) with a new approach (load reduction and daily rest) for people with Modic changes using a randomized controlled trial design. Methods Participants were patients from an outpatient clinic with persistent LBP and Modic changes. They were allocated using minimization to either rest therapy for 10 weeks, with a recommendation to rest for two hours daily and the option of using a flexible lumbar belt, or exercise therapy once a week for 10 weeks. Follow-up was at 10 weeks after recruitment and 52 weeks after intervention, and the clinical outcome measures were pain, disability, general health and global assessment, supplemented by weekly information on low back problems and sick leave measured by short text message (SMS) tracking. Results In total, 100 patients were included in the study. Data on 87 patients at 10 weeks and 96 patients at one-year follow-up were available and were used in the intention-to-treat analysis. No statistically significant differences were found between the two intervention groups on any outcome. Conclusions No differences were found between the two treatment approaches, 'rest and reduced load' and 'exercise and staying active', in patients with persistent LBP and Modic changes. Trial Registration ClinicalTrials.gov: NCT00454792 PMID:22376791
Acupoint Stimulation on Weight Reduction for Obesity: A Randomized Sham-Controlled Study.
Yeh, Mei-Ling; Chu, Nain-Feng; Hsu, Man-Ying F; Hsu, Chin-Che; Chung, Yu-Chu
2015-12-01
Auricular acupoint stimulation has become a popular weight-loss method. However, its efficacy for obesity treatment has not been fully studied. This study aimed to investigate the effect of a 10-week intervention of auricular electrical stimulation combined with auricular acupressure on weight reduction in obese outpatients. In this single-blind randomized sham-controlled study, 134 participants were randomly assigned to an experimental group receiving stimulation at true acupoints, or a sham group receiving stimulation delivered in the same manner but at sham acupoints. Each participant received weekly nutrition counseling from a nutritionist. The results showed significant differences in body mass index, blood pressure, total cholesterol, triglycerides, and leptin and adiponectin over time within groups, but not between groups. This study could not exclude the effects of placebo and dietary consultation. Further study adding a control group that receives no treatment is therefore needed to confirm the effects of auricular acupressure. PMID:25183702
Kwon, Hyuck Hoon; Yoon, Ji Young; Hong, Jong Soo; Jung, Jae Yoon; Park, Mi Sun; Suh, Dae Hun
2012-05-01
Recent studies have suggested that dietary factors, specifically glycaemic load, may be involved in the pathogenesis of acne. The aim of this study was to determine the clinical and histological effects on acne lesions of a low glycaemic load diet. A total of 32 patients with mild to moderate acne were randomly assigned to either a low glycaemic load diet or a control group diet, and completed a 10-week, parallel dietary intervention trial. Results indicate successful lowering of the glycaemic load. Subjects within the low glycaemic group demonstrated significant clinical improvement in the number of both non-inflammatory and inflammatory acne lesions. Histopathological examination of skin samples revealed several characteristics, including reduced size of sebaceous glands, decreased inflammation, and reduced expression of sterol regulatory element-binding protein-1, and interleukin-8 in the low glycaemic load group. A reduction in glycaemic load of the diet for 10 weeks resulted in improvements in acne. PMID:22678562
NASA Technical Reports Server (NTRS)
Lindsey, R. S., Jr. (Inventor)
1975-01-01
An exemplary embodiment of the present invention provides a source of random-width and random-spaced rectangular voltage pulses whose mean or average frequency of operation is controllable within prescribed limits of about 10 hertz to 1 megahertz. A pair of thin-film metal resistors is used to provide a differential white-noise voltage pulse source. Pulse shaping and amplification circuitry provides relatively short-duration pulses of constant amplitude which are applied to anti-bounce logic circuitry to prevent ringing effects. The pulse outputs from the anti-bounce circuits are then used to control two one-shot multivibrators whose output comprises the random-width and random-spaced rectangular pulses. Means are provided for monitoring, calibrating and evaluating the relative randomness of the generator.
NASA Astrophysics Data System (ADS)
De Gregorio, Alessandro; Orsingher, Enzo
2015-09-01
We consider random flights reflecting on the surface of a sphere with center at the origin and radius R, where reflection is performed by means of circular inversion. The random flights studied in this paper are motions in which the orientations of the deviations are uniformly distributed on the unit-radius sphere. We obtain the explicit probability distributions of the position of the moving particle when the number of changes of direction is fixed and equal to n. We show that these distributions involve functions which are solutions of the Euler-Poisson-Darboux equation. The unconditional probability distributions of the reflecting random flights are obtained by suitably randomizing n by means of a fractional-type Poisson process. Random flights reflecting on hyperplanes according to the optical reflection form are also considered and the related distributional properties derived.
Quantum random number generator
Pooser, Raphael C.
2016-05-10
A quantum random number generator (QRNG) and a photon generator for a QRNG are provided. The photon generator may be operated in a spontaneous mode below a lasing threshold to emit photons. Photons emitted from the photon generator may have at least one random characteristic, which may be monitored by the QRNG to generate a random number. In one embodiment, the photon generator may include a photon emitter and an amplifier coupled to the photon emitter. The amplifier may enable the photon generator to be used in the QRNG without introducing significant bias in the random number and may enable multiplexing of multiple random numbers. The amplifier may also desensitize the photon generator to fluctuations in power supplied thereto while operating in the spontaneous mode. In one embodiment, the photon emitter and amplifier may be a tapered diode amplifier.
Autonomous Byte Stream Randomizer
NASA Technical Reports Server (NTRS)
Paloulian, George K.; Woo, Simon S.; Chow, Edward T.
2013-01-01
Net-centric networking environments are often faced with limited resources and must utilize bandwidth as efficiently as possible. In networking environments that span wide areas, the data transmission has to be efficient, without any redundant or exuberant metadata. The Autonomous Byte Stream Randomizer software provides an extra level of security on top of existing data encryption methods. Randomizing the data's byte stream adds an extra layer to existing data protection methods, thus making it harder for an attacker to decrypt protected data. Based on a generated cryptographically secure random seed, a random sequence of numbers is used to intelligently and efficiently swap the organization of bytes in data using the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Swapping bytes and reorganizing the crucial structure of the byte data renders the data file unreadable and leaves the data in a deconstructed state. This deconstruction adds an extra level of security, requiring the byte stream to be reconstructed with the random seed in order to be readable. Once the data byte stream has been randomized, the software enables the data to be distributed to N nodes in an environment. Each piece of the data in randomized and distributed form is a separate entity, unreadable in its own right, but when combined with all N pieces it can be reconstructed into one. Reconstruction requires possession of the key used for randomizing the bytes, leading to the generation of the same cryptographically secure random sequence of numbers used to randomize the data. A cornerstone capability of this software is its ability to generate the same cryptographically secure sequence on different machines and at different times, thus allowing it to be used more heavily in net-centric environments where data transfer bandwidth is limited.
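The seeded, invertible Fisher-Yates shuffle at the heart of the abstract can be sketched as follows. This is an illustrative sketch only: the real software derives its swap sequence from a cryptographically secure seed, whereas Python's Mersenne Twister stands in here, and the function names are ours:

```python
import random

def randomize_bytes(data: bytes, seed: int) -> bytes:
    """In-place Fisher-Yates shuffle of a byte stream, driven by a seeded
    PRNG so that both ends can reproduce the same swap sequence."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for i in range(len(buf) - 1, 0, -1):
        j = rng.randint(0, i)          # unbiased swap partner for slot i
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)

def derandomize_bytes(data: bytes, seed: int) -> bytes:
    """Invert the shuffle by regenerating the same swap indices and
    replaying them in reverse order (each swap is self-inverse)."""
    rng = random.Random(seed)
    swaps = [(i, rng.randint(0, i)) for i in range(len(data) - 1, 0, -1)]
    buf = bytearray(data)
    for i, j in reversed(swaps):
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)

msg = b"net-centric payload"
restored = derandomize_bytes(randomize_bytes(msg, 42), 42)
```

Reconstruction succeeds only with the same seed, which is what makes the scrambled stream useless on its own.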
NASA Astrophysics Data System (ADS)
Chatterjee, Krishnendu; Doyen, Laurent; Gimbert, Hugo; Henzinger, Thomas A.
We consider two-player zero-sum games on graphs. These games can be classified on the basis of the information of the players and on the mode of interaction between them. On the basis of information the classification is as follows: (a) partial-observation (both players have partial view of the game); (b) one-sided complete-observation (one player has complete observation); and (c) complete-observation (both players have complete view of the game). On the basis of mode of interaction we have the following classification: (a) concurrent (players interact simultaneously); and (b) turn-based (players interact in turn). The two sources of randomness in these games are randomness in transition function and randomness in strategies. In general, randomized strategies are more powerful than deterministic strategies, and randomness in transitions gives more general classes of games. We present a complete characterization for the classes of games where randomness is not helpful in: (a) the transition function (probabilistic transition can be simulated by deterministic transition); and (b) strategies (pure strategies are as powerful as randomized strategies). As consequence of our characterization we obtain new undecidability results for these games.
NASA Astrophysics Data System (ADS)
Shivakiran Bhaktha, B. N.; Bachelard, Nicolas; Noblin, Xavier; Sebbah, Patrick
2012-10-01
Random lasing is reported in a dye-circulated structured polymeric microfluidic channel. The role of disorder, which results from the limited accuracy of the photolithographic process, is demonstrated by the variation of the emission spectrum with local pump position and by the extreme sensitivity to a local perturbation of the structure. Thresholds comparable to those of conventional microfluidic lasers are achieved, without the hurdle of state-of-the-art cavity fabrication. Potential applications of optofluidic random lasers for on-chip sensors are discussed. The introduction of random lasers into the field of optofluidics is a promising alternative for on-chip laser integration with light and fluidic functionalities.
Fenimore, E.E.
1980-08-22
A hexagonally shaped quasi-random no-two-holes-touching grid collimator. The quasi-random array grid collimator eliminates contamination from small-angle off-axis rays by using a no-two-holes-touching pattern which simultaneously provides for a self-supporting array, increasing throughput by elimination of a substrate. The present invention also provides maximum throughput using hexagonally shaped holes in a hexagonal lattice pattern for diffraction-limited applications. Mosaicking for reducing fabrication effort is also disclosed.
Sleep Education Improves the Sleep Duration of Adolescents: A Randomized Controlled Pilot Study
Kira, Geoff; Maddison, Ralph; Hull, Michelle; Blunden, Sarah; Olds, Timothy
2014-01-01
Purpose: To determine the feasibility and pilot a sleep education program in New Zealand high school students. Methods: A parallel, two-arm randomized controlled pilot trial was conducted. High school students (13 to 16 years) were randomly allocated to either a classroom-based sleep education program intervention (n = 15) or to a usual curriculum control group (n = 14). The sleep education program involved four 50-minute classroom-based education sessions with interactive groups. Students completed a 7-day sleep diary, a sleep questionnaire (including sleep hygiene, knowledge and problems) at baseline, post-intervention (4 weeks) and 10 weeks follow-up. Results: An overall treatment effect was observed for weekend sleep duration (F1,24 = 5.21, p = 0.03). Participants in the intervention group slept longer during weekend nights at 5 weeks (1:37 h:min, p = 0.01) and 10 weeks: (1:32 h:min, p = 0.03) compared to those in the control group. No differences were found between groups for sleep duration on weekday nights. No significant differences were observed between groups for any of the secondary outcomes (sleep hygiene, sleep problems, or sleep knowledge). Conclusions: A sleep education program appears to increase weekend sleep duration in the short term. Although this program was feasible, most schools are under time and resource pressure, thus alternative methods of delivery should be assessed for feasibility and efficacy. Larger trials of longer duration are needed to confirm these findings and determine the sustained effect of sleep education on sleep behavior and its impact on health and psychosocial outcomes. Commentary: A commentary on this article appears in this issue on page 793. Citation: Kira G, Maddison R, Hull M, Blunden S, Olds T. Sleep education improves the sleep duration of adolescents: a randomized controlled pilot study. J Clin Sleep Med 2014;10(7):787-792. PMID:25024657
NASA Astrophysics Data System (ADS)
Donnelly, Isaac
Random walks on lattices are a widely used model for diffusion in a continuum. They have been used to model subdiffusive systems, systems with forcing, and reactions, as well as a combination of the three. We extend the traditional random walk framework to networks to obtain novel results. As an example, due to the small graph diameter, the early-time behaviour of subdiffusive dynamics dominates the observed system, which has implications for models of the brain or airline networks. I would like to thank the Australian American Fulbright Association.
Intermittency and random matrices
NASA Astrophysics Data System (ADS)
Sokoloff, Dmitry; Illarionov, E. A.
2015-08-01
A spectacular phenomenon of intermittency, i.e. a progressive growth of higher statistical moments of a physical field excited by an instability in a random medium, attracted the attention of Zeldovich in the last years of his life. At that time, the mathematical aspects underlying the physical description of this phenomenon were still under development and relations between various findings in the field remained obscure. Contemporary results from the theory of the product of independent random matrices (the Furstenberg theory) allowed the elaboration of the phenomenon of intermittency in a systematic way. We consider applications of the Furstenberg theory to some problems in cosmology and dynamo theory.
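The central quantity in the Furstenberg theory of products of independent random matrices is the top Lyapunov exponent. A minimal numerical sketch, with an ensemble chosen by us purely for illustration (scaled random rotations, for which the exponent is exactly the mean log-stretch, here zero):

```python
import math
import random

def top_lyapunov(sample_matrix, n_steps=10000, seed=0):
    """Estimate the top Lyapunov exponent of a product of i.i.d. random
    2x2 matrices by tracking the log-growth of a repeatedly renormalized
    test vector. Positivity of this exponent is what drives the
    progressive growth of statistical moments in intermittent fields."""
    rng = random.Random(seed)
    v = (1.0, 0.0)
    log_growth = 0.0
    for _ in range(n_steps):
        a, b, c, d = sample_matrix(rng)      # matrix [[a, b], [c, d]]
        v = (a * v[0] + b * v[1], c * v[0] + d * v[1])
        norm = math.hypot(v[0], v[1])
        log_growth += math.log(norm)
        v = (v[0] / norm, v[1] / norm)       # renormalize to avoid overflow
    return log_growth / n_steps

def scaled_rotation(rng):
    """Random rotation times a stretch of 0.5 or 2.0; rotations preserve
    length, so the exponent is E[log s] = 0 for this toy ensemble."""
    t = rng.uniform(0.0, 2.0 * math.pi)
    s = rng.choice((0.5, 2.0))
    return (s * math.cos(t), -s * math.sin(t),
            s * math.sin(t), s * math.cos(t))

lyap = top_lyapunov(scaled_rotation)
```

For generic non-commuting ensembles the exponent is strictly positive (Furstenberg), which is the mathematical backbone of the intermittency described above.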
ERIC Educational Resources Information Center
Ben-Ari, Mordechai
2004-01-01
The term "random" is frequently used in discussion of the theory of evolution, even though the mathematical concept of randomness is problematic and of little relevance in the theory. Therefore, since the core concept of the theory of evolution is the non-random process of natural selection, the term random should not be used in teaching the…
A discrete fractional random transform
NASA Astrophysics Data System (ADS)
Liu, Zhengjun; Zhao, Haifa; Liu, Shutian
2005-11-01
We propose a discrete fractional random transform based on a generalization of the discrete fractional Fourier transform with an intrinsic randomness. The discrete fractional random transform inherits the excellent mathematical properties of the fractional Fourier transform along with some distinctive features of its own. As a primary application, the discrete fractional random transform has been used for image encryption and decryption.
Uniform random number generators
NASA Technical Reports Server (NTRS)
Farr, W. R.
1971-01-01
Methods are presented for the generation of random numbers with uniform and normal distributions. Subprogram listings of Fortran generators for the Univac 1108, SDS 930, and CDC 3200 digital computers are also included. The generators are of the mixed multiplicative type, and the mathematical method employed is that of Marsaglia and Bray.
ERIC Educational Resources Information Center
Griffiths, Martin
2011-01-01
One of the author's undergraduate students recently asked him whether it was possible to generate a random positive integer. After some thought, the author realised that there were plenty of interesting mathematical ideas inherent in her question. So much so in fact, that the author decided to organise a workshop, open both to undergraduates and…
Feng Haidong; Siegel, Warren
2006-08-15
We propose some new simplifying ingredients for Feynman diagrams that seem necessary for random lattice formulations of superstrings. In particular, half the fermionic variables appear only in particle loops (similarly to loop momenta), reducing the supersymmetry of the constituents of the type IIB superstring to N=1, as expected from their interpretation in the 1/N expansion as super Yang-Mills.
Randomization and sampling issues
Geissler, P.H.
1996-01-01
The need for randomly selected routes and other sampling issues have been debated by the Amphibian electronic discussion group. Many excellent comments have been made, pro and con, but we have not reached consensus yet. This paper brings those comments together and attempts a synthesis. I hope that the resulting discussion will bring us closer to a consensus.
FitzGerald, Mary P; Anderson, Rodney U; Potts, Jeannette; Payne, Christopher K; Peters, Kenneth M; Clemens, J Quentin; Kotarinos, Rhonda; Fraser, Laura; Cosby, Annamarie; Fortman, Carole; Neville, Cynthia; Badillo, Suzanne; Odabachian, Lisa; Sanfield, Anna; O’Dougherty, Betsy; Halle-Podell, Rick; Cen, Liyi; Chuai, Shannon; Landis, J Richard; Kusek, John W; Nyberg, Leroy M
2010-01-01
Objectives To determine the feasibility of conducting a randomized clinical trial designed to compare two methods of manual therapy (myofascial physical therapy (MPT) and global therapeutic massage (GTM)) among patients with urologic chronic pelvic pain syndromes. Materials and Methods Our goal was to recruit 48 subjects with chronic prostatitis/chronic pelvic pain syndrome or interstitial cystitis/painful bladder syndrome at six clinical centers. Eligible patients were randomized to either MPT or GTM and were scheduled to receive up to 10 weekly treatments, each 1 hour in duration. Criteria to assess feasibility included adherence of therapists to the prescribed therapeutic protocol as determined by records of treatment, adverse events which occurred during study treatment, and rate of response to therapy as assessed by the Patient Global Response Assessment (GRA). Primary outcome analysis compared response rates between treatment arms using Mantel-Haenszel methods. Results Twenty-three (49%) men and 24 (51%) women were randomized over a six-month period. Twenty-four (51%) patients were randomized to GTM, 23 (49%) to MPT; 44 (94%) patients completed the study. Therapist adherence to the treatment protocols was excellent. The GRA response rate of 57% in the MPT group was significantly higher than the rate of 21% in the GTM treatment group (p=0.03). Conclusions The goals to judge feasibility of conducting a full-scale trial of physical therapy methods were met. The preliminary findings of a beneficial effect of MPT warrant further study. PMID:19535099
Thomas, D R; Goode, P S; LaMaster, K; Tennyson, T
1998-10-01
Aloe vera has been used for centuries as a topical treatment for various conditions and as a cathartic. An amorphous hydrogel dressing derived from the aloe plant (Carrasyn Gel Wound Dressing, Carrington Laboratories, Inc., Irving, TX) is approved by the Food and Drug Administration for the management of Stages I through IV pressure ulcers. To evaluate effectiveness of this treatment, 30 patients were randomized to receive either daily topical application of the hydrogel study dressing (acemannan hydrogel wound dressing) or a moist saline gauze dressing. Complete healing of the study ulcer occurred in 19 of 30 subjects (63%) during the 10-week observation period. No difference was observed in complete healing between the experimental and the control groups (odds ratio 0.93, 95% CI 0.16, 5.2). This study indicates that the acemannan hydrogel dressing is as effective as, but is not superior to, a moist saline gauze wound dressing for the management of pressure ulcers. PMID:10326343
Relativistic Weierstrass random walks.
Saa, Alberto; Venegeroles, Roberto
2010-08-01
The Weierstrass random walk is a paradigmatic Markov chain giving rise to a Lévy-type superdiffusive behavior. It is well known that special relativity prevents the arbitrarily high velocities necessary to establish superdiffusive behavior in any process occurring in Minkowski spacetime, implying, in particular, that any relativistic Markov chain describing spacetime phenomena must be essentially Gaussian. Here, we introduce a simple relativistic extension of the Weierstrass random walk and show that there must exist a transition time t_c delimiting two qualitatively distinct dynamical regimes: the (nonrelativistic) superdiffusive Lévy flights, for t
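The nonrelativistic Weierstrass walk underlying this abstract draws steps of length b^j with probability proportional to a^(-j), so rare, very long jumps produce the Lévy-type superdiffusion. A minimal sketch of one step, with parameter values chosen by us for illustration (the standard construction, not the paper's relativistic extension):

```python
import random

def weierstrass_step(rng, a=2.0, b=3.0, jmax=30):
    """One step of the Weierstrass random walk: length b**j with
    probability (1 - 1/a) * a**-j (a geometric law over the scale
    index j), and a symmetric random sign."""
    u = rng.random()
    j, p, cum = 0, 1.0 - 1.0 / a, 0.0
    while j < jmax:
        cum += p
        if u < cum:
            break
        p /= a                       # P(j+1) = P(j) / a
        j += 1
    return rng.choice((-1.0, 1.0)) * b ** j

rng = random.Random(7)
position = sum(weierstrass_step(rng) for _ in range(1000))
```

With b*b > a (as here), the step-length variance diverges and the walk is superdiffusive, which is exactly what a relativistic velocity cap must eventually tame.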
Ciamarra, Massimo Pica; Coniglio, Antonio
2008-09-19
We measure the number Omega(phi) of mechanically stable states of volume fraction phi of a granular assembly under gravity. The granular entropy S(phi) = log Omega(phi) vanishes both at high density, at phi approximately equal to phi_rcp, and at low density, at phi approximately equal to phi_rvlp, where phi_rvlp is a new lower bound we call the random very loose pack. phi_rlp is the volume fraction where the entropy is maximal. These findings allow for a clear explanation of compaction experiments and provide the first first-principles definition of the random loose volume fraction. In the context of the statistical mechanics approach to static granular materials, states with phi
Talley, Nicholas J.; Locke, G. Richard; Saito, Yuri A.; Almazar, Ann E.; Bouras, Ernest P.; Howden, Colin W.; Lacy, Brian E.; DiBaise, John K.; Prather, Charlene M.; Abraham, Bincy P.; El-Serag, Hashem B.; Moayyedi, Paul; Herrick, Linda M.; Szarka, Lawrence A.; Camilleri, Michael; Hamilton, Frank A.; Schleck, Cathy D.; Tilkes, Katherine E.; Zinsmeister, Alan R.
2015-01-01
Background & Aims Anti-depressants are frequently prescribed to treat functional dyspepsia (FD), a common disorder characterized by upper abdominal symptoms, including discomfort or post-prandial fullness. However, there is little evidence for the efficacy of these drugs in patients with FD. We performed a randomized, double-blind, placebo-controlled trial to evaluate the effects of anti-depressant therapy on symptoms, gastric emptying (GE), and meal-induced satiety in patients with FD. Methods We performed a study at 8 North American sites of patients who met the Rome II criteria for FD and did not have depression or use anti-depressants. Subjects (n=292; 44±15 y old, 75% female, 70% with dysmotility-like FD, and 30% with ulcer-like FD) were randomly assigned to groups given placebo, 50 mg amitriptyline, or 10 mg escitalopram for 10 weeks. The primary endpoint was adequate relief of FD symptoms for ≥5 weeks of the last 10 weeks (out of 12). Secondary endpoints included GE time, maximum tolerated volume in a nutrient drink test, and FD-related quality of life. Results An adequate relief response was reported by 39 subjects given placebo (40%), 51 given amitriptyline (53%), and 37 given escitalopram (38%) (P=.05, following treatment, adjusted for baseline balancing factors including all subjects). Subjects with ulcer-like FD given amitriptyline were more than 3-fold more likely to report adequate relief than those given placebo (odds ratio=3.1; 95% confidence interval, 1.1–9.0). Neither amitriptyline nor escitalopram appeared to affect GE or meal-induced satiety after the 10-week period in any group. Subjects with delayed GE were less likely to report adequate relief than subjects with normal GE (odds ratio=0.4; 95% confidence interval, 0.2–0.8). Both anti-depressants improved overall quality of life. Conclusions Amitriptyline, but not escitalopram, appears to benefit some patients with FD, particularly those with ulcer-like (painful) FD. Patients
A random number generator for continuous random variables
NASA Technical Reports Server (NTRS)
Guerra, V. M.; Tapia, R. A.; Thompson, J. R.
1972-01-01
A FORTRAN 4 routine is given which may be used to generate random observations of a continuous real-valued random variable. Values of the normal distribution F(x), X, E(Akima), and E(linear) are presented in tabular form.
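A generic way to generate observations of a continuous random variable, in the spirit of such a routine, is the inverse-transform method: if U is uniform on (0,1), then F^(-1)(U) has distribution F. A sketch in Python rather than FORTRAN; the function names and the exponential example are ours, not the routine's:

```python
import math
import random

def sample_continuous(inverse_cdf, n, seed=None):
    """Generate n observations of a continuous random variable by the
    inverse-transform method: draw U ~ Uniform(0,1) and return F^{-1}(U)."""
    rng = random.Random(seed)
    return [inverse_cdf(rng.random()) for _ in range(n)]

# Example: exponential with rate lam, where F^{-1}(u) = -ln(1 - u) / lam.
lam = 2.0
draws = sample_continuous(lambda u: -math.log(1.0 - u) / lam, 10000, seed=3)
mean = sum(draws) / len(draws)      # should be near 1/lam = 0.5
```

Distributions without a closed-form inverse CDF (such as the normal) are typically handled with a tabulated or interpolated inverse, which is where schemes like Akima or linear interpolation come in.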
The RANDOM computer program: A linear congruential random number generator
NASA Technical Reports Server (NTRS)
Miles, R. F., Jr.
1986-01-01
The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) the RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) the RANCYCLE and ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
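The linear congruential recurrence x_{n+1} = (a*x_n + c) mod m that such programs implement fits in a few lines. A hedged sketch: the multiplier and increment below are the well-known Numerical Recipes constants, not the parameters selected by the RANDOM program:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Minimal linear congruential generator,
    x_{n+1} = (a * x_n + c) mod m, yielding uniforms on [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(12345)
sample = [next(gen) for _ in range(5)]
```

Parameter selection matters: a poor choice of a, c, m gives short cycles or lattice artifacts, which is precisely what the RANCYCLE and ARITH helper programs are meant to check.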
Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design
ERIC Educational Resources Information Center
Wagler, Amy; Wagler, Ron
2014-01-01
Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…
Random numbers from vacuum fluctuations
NASA Astrophysics Data System (ADS)
Shi, Yicheng; Chng, Brenda; Kurtsiefer, Christian
2016-07-01
We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read into a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
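The extraction stage can be illustrated with a toy Fibonacci linear feedback shift register. The register width and tap polynomial below are a standard textbook choice (x^16 + x^14 + x^13 + x^11 + 1, maximal length), not the parameters of the actual device:

```python
def lfsr_stream(state, taps=(16, 14, 13, 11), nbits=16):
    """Fibonacci LFSR: XOR the tapped bits to form a feedback bit, shift
    the register right, and insert the feedback at the top; yields one
    output bit per step. With a maximal-length polynomial and a nonzero
    seed the state cycles through all 2**nbits - 1 nonzero values."""
    mask = (1 << nbits) - 1
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> (nbits - t)) & 1
        state = ((state >> 1) | (fb << (nbits - 1))) & mask
        yield state & 1

gen = lfsr_stream(0xACE1)
bits = [next(gen) for _ in range(32)]
```

In hardware, feeding the digitized homodyne samples through such a register whitens residual bias at very high throughput, which is why LFSR-based extraction suits a 480 Mbit/s stream.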
Cluster Randomized Controlled Trial
Young, John; Chapman, Katie; Nixon, Jane; Patel, Anita; Holloway, Ivana; Mellish, Kirste; Anwar, Shamaila; Breen, Rachel; Knapp, Martin; Murray, Jenni; Farrin, Amanda
2015-01-01
Background and Purpose— We developed a new postdischarge system of care comprising a structured assessment covering longer-term problems experienced by patients with stroke and their carers, linked to evidence-based treatment algorithms and reference guides (the longer-term stroke care system of care) to address the poor longer-term recovery experienced by many patients with stroke. Methods— A pragmatic, multicentre, cluster randomized controlled trial of this system of care. Eligible patients referred to community-based Stroke Care Coordinators were randomized to receive the new system of care or usual practice. The primary outcome was improved patient psychological well-being (General Health Questionnaire-12) at 6 months; secondary outcomes included functional outcomes for patients, carer outcomes, and cost-effectiveness. Follow-up was through self-completed postal questionnaires at 6 and 12 months. Results— Thirty-two stroke services were randomized (29 participated); 800 patients (399 control; 401 intervention) and 208 carers (100 control; 108 intervention) were recruited. In intention to treat analysis, the adjusted difference in patient General Health Questionnaire-12 mean scores at 6 months was −0.6 points (95% confidence interval, −1.8 to 0.7; P=0.394) indicating no evidence of statistically significant difference between the groups. Costs of Stroke Care Coordinator inputs, total health and social care costs, and quality-adjusted life year gains at 6 months, 12 months, and over the year were similar between the groups. Conclusions— This robust trial demonstrated no benefit in clinical or cost-effectiveness outcomes associated with the new system of care compared with usual Stroke Care Coordinator practice. Clinical Trial Registration— URL: http://www.controlled-trials.com. Unique identifier: ISRCTN 67932305. PMID:26152298
Random recursive trees and the elephant random walk
NASA Astrophysics Data System (ADS)
Kürsten, Rüdiger
2016-03-01
Elephant random walks, a class of random walks with infinite memory, are simple models describing anomalous diffusion. We present a surprising connection between these models and bond percolation on random recursive trees. We use a coupling between the two models to translate results from elephant random walks to the percolation process. We calculate, besides other quantities, exact expressions for the first and second moments of the root cluster size and of the number of nodes in child clusters of the first generation. We further introduce another model, the skew elephant random walk, and calculate the first and second moments of this process.
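The elephant random walk's memory rule is simple to state in code: each new step recalls a uniformly chosen previous step and repeats it with probability p, reversing it otherwise. A sketch of the standard construction; the parameter value is illustrative:

```python
import random

def elephant_walk(n_steps, p=0.75, seed=None):
    """Simulate an elephant random walk of n_steps unit steps.

    The first step is symmetric; every later step recalls a uniformly
    chosen earlier step (infinite memory) and repeats it with
    probability p, reverses it with probability 1 - p."""
    rng = random.Random(seed)
    steps = [rng.choice((-1, 1))]
    for _ in range(n_steps - 1):
        recalled = rng.choice(steps)       # uniform over the full history
        steps.append(recalled if rng.random() < p else -recalled)
    return sum(steps)

final_position = elephant_walk(1000, seed=7)
```

For p > 3/4 the walk is superdiffusive; the coupling in the paper maps this memory structure onto bond percolation on a random recursive tree.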
Randomly Hyperbranched Polymers
NASA Astrophysics Data System (ADS)
Konkolewicz, Dominik; Gilbert, Robert G.; Gray-Weale, Angus
2007-06-01
We describe a model for the structures of randomly hyperbranched polymers in solution, and find a logarithmic growth of radius with polymer mass. We include segmental overcrowding, which puts an upper limit on the density. The model is tested against simulations, against data on amylopectin, a major component of starch, on glycogen, and on polyglycerols. For samples of synthetic polyglycerol and glycogen, our model holds well for all the available data. The model reveals higher-level scaling structure in glycogen, related to the β particles seen in electron microscopy.
Mulet, R; Pagnani, A; Weigt, M; Zecchina, R
2002-12-23
We study the graph coloring problem over random graphs of finite average connectivity c. Given a number q of available colors, we find that graphs with low connectivity almost always admit a proper coloring, whereas graphs with high connectivity are uncolorable. Depending on q, we find the precise value of the critical average connectivity c(q). Moreover, we show that below c(q) there exists a clustering phase for c in [c(d),c(q)] in which ground states spontaneously divide into an exponential number of clusters, and where the proliferation of metastable states is responsible for the onset of complexity in local search algorithms. PMID:12484862
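A minimal numerical companion to this setting: an Erdős–Rényi graph with average connectivity c and a naive greedy q-coloring attempt. This is a simple local heuristic of the kind the abstract says gets trapped by metastable states, not the cavity-method analysis of the paper; the function names are ours:

```python
import random

def random_graph(n, c, seed=None):
    """Erdos-Renyi graph on n vertices with edge probability c/n, so the
    average degree (connectivity) is approximately c. Adjacency-set form."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    p = c / n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def greedy_coloring(adj, q):
    """Color vertices in decreasing-degree order with the lowest free
    color; returns a proper coloring, or None on greedy failure."""
    color = {}
    for v in sorted(adj, key=lambda w: -len(adj[w])):
        used = {color[u] for u in adj[v] if u in color}
        free = [k for k in range(q) if k not in used]
        if not free:
            return None
        color[v] = free[0]
    return color

adj = random_graph(200, 2.0, seed=4)   # c = 2 is well below the q = 3 threshold
coloring = greedy_coloring(adj, 3)
```

Near c(q) such greedy and local-search procedures fail long before colorings stop existing, which is the algorithmic signature of the clustered phase.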
Can randomization be informative?
NASA Astrophysics Data System (ADS)
Pereira, Carlos A. B.; Campos, Thiago F.; Silva, Gustavo M.; Wechsler, Sergio
2012-10-01
In this paper the Pair of Siblings Paradox introduced by Pereira [1] is extended by considering more than two children and more than one child observed for gender. We follow the same lines as Wechsler et al. [2], who generalize the three prisoners' dilemma introduced by Gardner [3]. This paper's conjecture is that the Pair of Siblings and the Three Prisoners dilemma are dual paradoxes. Looking at possible likelihoods, the sure (randomized) selection for the former is non-informative (informative), the opposite of what holds for the latter. This situation is maintained for the generalizations. A non-informative likelihood here means that the prior and posterior are equal.
Randomized Response Analysis in Mplus
ERIC Educational Resources Information Center
Hox, Joop; Lensvelt-Mulders, Gerty
2004-01-01
This article describes a technique to analyze randomized response data using available structural equation modeling (SEM) software. The randomized response technique was developed to obtain estimates that are more valid when studying sensitive topics. The basic feature of all randomized response methods is that the data are deliberately…
Random Numbers and Quantum Computers
ERIC Educational Resources Information Center
McCartney, Mark; Glass, David
2002-01-01
The topic of random numbers is investigated in such a way as to illustrate links between mathematics, physics and computer science. First, the generation of random numbers by a classical computer using the linear congruential generator and logistic map is considered. It is noted that these procedures yield only pseudo-random numbers since…
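Of the two classical constructions the article mentions, the logistic map is the shorter to sketch: iterate x -> 4x(1-x) in its fully chaotic regime and threshold each iterate to get a bit. The seed and threshold below are illustrative classroom values, not anything prescribed by the article:

```python
def logistic_bits(x0=0.615, n=32):
    """Pseudo-random bits from the chaotic logistic map x -> 4x(1-x),
    emitting 1 when the iterate is >= 1/2 and 0 otherwise. Deterministic
    given x0, hence pseudo-random, and not suitable for cryptography."""
    x, bits = x0, []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

stream = logistic_bits()
```

Two nearby seeds diverge exponentially, which illustrates the sensitive dependence that makes the output look random despite being fully deterministic.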
Investigating the Randomness of Numbers
ERIC Educational Resources Information Center
Pendleton, Kenn L.
2009-01-01
The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…
Random Selection for Drug Screening
Center for Human Reliability Studies
2007-05-01
Simple random sampling is generally the starting point for a random sampling process. This sampling technique ensures that each individual within a group (population) has an equal chance of being selected. There are a variety of ways to implement random sampling in a practical situation.
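In code, simple random sampling reduces to a one-liner on top of a seeded generator; a minimal Python sketch (function and roster names are hypothetical):

```python
import random

def select_for_screening(roster, k, seed=None):
    """Simple random sample: every individual in the roster has an equal
    chance of being among the k selected; a seed makes the draw auditable."""
    rng = random.Random(seed)
    return rng.sample(roster, k)

roster = [f"EMP{i:03d}" for i in range(100)]
picked = select_for_screening(roster, 5, seed=2024)
```

Sampling without replacement guarantees no individual is selected twice in one draw, and recording the seed lets the selection be independently verified.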
Atomoxetine for Orthostatic Hypotension in an Elderly Patient Over 10 Weeks: A Case Report.
Hale, Genevieve M; Brenner, Michael
2015-09-01
Several nonpharmacologic strategies for orthostatic hypotension exist including avoiding large carbohydrate-rich meals; limiting alcohol consumption; maintaining adequate hydration; adding salt to foods; and using compression stockings, tilt-table exercises, or abdominal binders. If these fail, however, only limited evidence-based pharmacologic treatment options are available including the use of fludrocortisone, midodrine, pyridostigmine, and droxidopa as well as pseudoephedrine, ocetreotide, and atomoxetine. This report discusses a case of atomoxetine use for 10 weeks in an elderly patient with primary orthostatic hypotension. An 84-year-old man with long-standing primary orthostatic hypotension presented to our ambulatory cardiology pharmacotherapy clinic after several unsuccessful pharmacologic therapies including fludrocortisone, midodrine, and pyridostigmine. Nonpharmacologic strategies were also implemented. Atomoxetine was initiated, and the patient showed gradual improvements in symptoms and blood pressure control over the course of 10 weeks. Our data suggest that low-dose atomoxetine is an effective and safe agent for symptom improvement and blood pressure control in elderly patients with primary orthostatic hypotension. PMID:26406777
HP-PRRSV challenge of 4 and 10-week-old pigs
Technology Transfer Automated Retrieval System (TEKTRAN)
In 2006 a unique syndrome was recognized in growing pigs in China with the predominant clinical signs being high fever, anorexia, listlessness, red discoloration of skin, and respiratory distress. The disease had a very high morbidity and mortality rate and became known as porcine high fever disease...
NASA Astrophysics Data System (ADS)
Mak, Chi H.; Pham, Phuong; Afif, Samir A.; Goodman, Myron F.
2015-09-01
Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that were absent if the substrate DNA was homogeneous. The C →U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics.
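For intuition only, a toy one-dimensional sketch (not the authors' intrusive or passive models; all parameters hypothetical) of an enzyme that scans by an unbiased random walk and converts 'C' targets to 'U' as it goes:

```python
import random

def deamination_profile(sequence, n_enzymes, steps, p_cat, rng):
    """Toy random-walk enzyme: each enzyme starts at a random position,
    performs an unbiased walk, and converts a 'C' it is sitting on to 'U'
    with probability p_cat per visit. Returns the final strand and the
    per-site conversion counts."""
    seq = list(sequence)
    counts = [0] * len(seq)
    for _ in range(n_enzymes):
        pos = rng.randrange(len(seq))
        for _ in range(steps):
            if seq[pos] == "C" and rng.random() < p_cat:
                seq[pos] = "U"
                counts[pos] += 1
            pos = max(0, min(len(seq) - 1, pos + rng.choice((-1, 1))))
    return "".join(seq), counts
```

Even this crude coupling of catalysis to the walk makes each site's conversion depend on the surrounding targets, the qualitative effect the abstract describes.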
NASA Astrophysics Data System (ADS)
Ben-Naim, E.; Hengartner, N. W.; Redner, S.; Vazquez, F.
2013-05-01
We study the effects of randomness on competitions based on an elementary random process in which there is a finite probability that a weaker team upsets a stronger team. We apply this model to sports leagues and sports tournaments, and compare the theoretical results with empirical data. Our model shows that single-elimination tournaments are efficient but unfair: the number of games is proportional to the number of teams N, but the probability that the weakest team wins decays only algebraically with N. In contrast, leagues, where every team plays every other team, are fair but inefficient: the top √N teams remain in contention for the championship, while the probability that the weakest team becomes champion is exponentially small. We also propose a gradual elimination schedule that consists of a preliminary round and a championship round. Initially, teams play a small number of preliminary games, and subsequently a few teams qualify for the championship round. This algorithm is fair and efficient: the best team wins with high probability, and the number of games scales as N^(9/5), whereas traditional leagues require N^3 games to fairly determine a champion.
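The single-elimination ingredient of the model is easy to simulate; in this sketch (my parameterization, with team 0 the strongest and a fixed upset probability q):

```python
import random

def knockout_winner(teams, q, rng):
    """Play a single-elimination bracket: in each match the stronger team
    (lower index) advances unless an upset occurs with probability q."""
    teams = list(teams)
    while len(teams) > 1:
        rng.shuffle(teams)  # random pairings each round
        nxt = []
        for a, b in zip(teams[::2], teams[1::2]):
            fav, dog = (a, b) if a < b else (b, a)
            nxt.append(dog if rng.random() < q else fav)
        teams = nxt
    return teams[0]

rng = random.Random(0)
wins = sum(knockout_winner(range(8), 0.25, rng) == 0 for _ in range(2000))
win_rate = wins / 2000  # the top seed wins far more often than 1/8, but not always
```

With q = 0.25 and 8 teams, the best team must survive three rounds, so it wins with probability 0.75³ ≈ 0.42, illustrating "efficient but unfair".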
Random rough surface photofabrication
NASA Astrophysics Data System (ADS)
Brissonneau, Vincent; Escoubas, Ludovic; Flory, François; Berginc, Gérard
2011-10-01
Random rough surfaces are of primary interest for their optical properties: reducing reflection at the interface or obtaining a specific scattering diagram, for example. Controlling the surface statistics during the fabrication process thus paves the way to original and specific behaviors of reflected optical waves. We detail an experimental method allowing the fabrication of random rough surfaces with tuned statistical properties. A two-step photoresist exposure process was developed. In order to initiate photoresist polymerization, an energy threshold needs to be reached by light exposure. This energy is delivered by uniform-exposure equipment comprising UV-LEDs. This pre-exposure is studied by varying parameters such as optical power and exposure time. The second step consists of an exposure based on the Gray method [1]. The speckle pattern of an enlarged scattered laser beam is used to expose the photoresist. A specific photofabrication bench using an argon ion laser was implemented. Parameters such as exposure time and distances between optical components are discussed. We then describe how we modify the speckle-based exposure bench to include a spatial light modulator (SLM). The SLM used is a micromirror matrix known as a Digital Micromirror Device (DMD), which allows spatial modulation by displaying binary images. The spatial beam shape can thus be tuned, and so the speckle pattern on the photoresist is modified. As the photofabricated photoresist surface is correlated with the speckle pattern used to expose it, the roughness parameters can be adjusted.
Mosnaim, Giselle; Li, Hong; Martin, Molly; Richardson, DeJuran; Belice, Paula Jo; Avery, Elizabeth; Ryan, Norman; Bender, Bruce; Powell, Lynda
2013-01-01
Background Poor adherence to inhaled corticosteroids (ICS) is a critical risk factor contributing to asthma morbidity among low-income minority adolescents. Objective This trial tested whether peer support group meetings and peer asthma messages delivered via mp3 players improved adherence to ICS. Methods Low-income African American and/or Hispanic adolescents, ages 11–16, with persistent asthma and poor (≤ 48%) adherence to prescription ICS during the 3-week run-in were randomized to the intervention or attention control group (ATG) for the 10-week treatment. During treatment, the intervention arm participated in weekly peer-led coping support sessions and received mp3 peer-recorded asthma messages promoting adherence. The ATG participated in weekly meetings with a research assistant and received an equivalent number of mp3 doctor-recorded asthma messages. Adherence was measured using self-report and the DoserCT (Meditrac, Inc.), an electronic dose counter. The primary outcome was the difference in adherence at 10 weeks between the two arms. Results Thirty-four subjects were randomized to each arm. At 10 weeks, no statistical difference in objectively measured adherence could be detected between the two arms after adjusting for baseline adherence (P = 0.929). Adherence declined in both groups over the course of the active treatment period. In both study arms, participants' self-reported adherence was significantly higher than their objectively measured adherence at week 10 (P < 0.0001). Conclusion Improving medication adherence in longitudinal studies is challenging. Peer support and mp3-delivered peer asthma messages may not be of sufficient dose to improve outcomes. PMID:24565620
Reidys, C.M.
1996-06-01
A mapping in random-structures is defined on the vertices of a generalized hypercube Q_α^n. A random-structure consists of (1) a random contact graph and (2) a family of relations imposed on adjacent vertices. The vertex set of a random contact graph is the set of all coordinates of a vertex P ∈ Q_α^n. Its edge set is the union of the edge sets of two random graphs: the first is a random 1-regular graph on 2m vertices (coordinates), and the second is a random graph G_p with p = c_2/n on all n vertices (coordinates). The structure of the random contact graphs is investigated, and it is shown that for certain values of m and c_2 the mapping in random-structures allows searching through the set of random-structures. This is applied to mappings in RNA secondary structures. The results on random-structures might also be helpful for designing 3D-folding algorithms for RNA.
Reinelt, Douglas A.; van Swol, Frank B.; Kraynik, Andrew Michael
2004-06-01
The Surface Evolver was used to compute the equilibrium microstructure of dry soap foams with random structure and a wide range of cell-size distributions. Topological and geometric properties of foams and individual cells were evaluated. The theory for isotropic Plateau polyhedra describes the dependence of cell geometric properties on their volume and number of faces. The surface area of all cells is about 10% greater than a sphere of equal volume; this leads to a simple but accurate theory for the surface free energy density of foam. A novel parameter based on the surface-volume mean bubble radius R32 is used to characterize foam polydispersity. The foam energy, total cell edge length, and average number of faces per cell all decrease with increasing polydispersity. Pentagonal faces are the most common in monodisperse foam but quadrilaterals take over in highly polydisperse structures.
Accelerated randomized benchmarking
NASA Astrophysics Data System (ADS)
Granade, Christopher; Ferrie, Christopher; Cory, D. G.
2015-01-01
Quantum information processing offers promising advances for a wide range of fields and applications, provided that we can efficiently assess the performance of the control applied in candidate systems. That is, we must be able to determine whether we have implemented a desired gate, and refine accordingly. Randomized benchmarking reduces the difficulty of this task by exploiting symmetries in quantum operations. Here, we bound the resources required for benchmarking and show that, with prior information, we can achieve several orders of magnitude better accuracy than in traditional approaches to benchmarking. Moreover, by building on state-of-the-art classical algorithms, we reach these accuracies with near-optimal resources. Our approach requires an order of magnitude less data to achieve the same accuracies and to provide online estimates of the errors in the reported fidelities. We also show that our approach is useful for physical devices by comparing to simulations.
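The data model underlying randomized benchmarking is an exponential decay of average sequence fidelity, F(m) = A·p^m + B, over sequence length m. A minimal sketch (assuming A and B are known, which real protocols must themselves estimate) recovering p by log-linear least squares:

```python
import math

def fit_rb_decay(ms, fs, A=0.5, B=0.5):
    """Estimate p in the benchmarking decay F(m) = A * p**m + B by
    log-linearising, log((F - B) / A) = m * log(p), and regressing
    through the origin (A and B assumed known here for brevity)."""
    ys = [math.log((f - B) / A) for f in fs]
    slope = sum(m * y for m, y in zip(ms, ys)) / sum(m * m for m in ms)
    return math.exp(slope)

ms = [1, 2, 4, 8, 16, 32]
fs = [0.5 * 0.98 ** m + 0.5 for m in ms]
p_hat = fit_rb_decay(ms, fs)  # recovers p = 0.98 on noiseless data
```

The paper's contribution is in doing far better than such naive fitting (Bayesian estimation with prior information); this sketch only shows the decay model being fit.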
How random are random numbers generated using photons?
NASA Astrophysics Data System (ADS)
Solis, Aldo; Angulo Martínez, Alí M.; Ramírez Alarcón, Roberto; Cruz Ramírez, Hector; U'Ren, Alfred B.; Hirsch, Jorge G.
2015-06-01
Randomness is fundamental in quantum theory, with many philosophical and practical implications. In this paper we discuss the concept of algorithmic randomness, which provides a quantitative method to assess the Borel normality of a given sequence of numbers, a necessary condition for it to be considered random. We use Borel normality as a tool to investigate the randomness of ten sequences of bits generated from the differences between detection times of photon pairs generated by spontaneous parametric downconversion. These sequences are shown to fulfil the randomness criteria without difficulties. As deviations from Borel normality for photon-generated random number sequences have been reported in previous work, a strategy to understand these diverging findings is outlined.
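A simplified check in the spirit of Borel normality (non-overlapping blocks and a heuristic tolerance of √(log₂ n / n); the paper's exact criterion differs) can be written as:

```python
import math
from collections import Counter
from itertools import product

def borel_normal(bits, max_k=3, tol=None):
    """Return True if every binary block of length k <= max_k occurs with
    frequency within tol of 2**-k, counting non-overlapping blocks."""
    n = len(bits)
    if tol is None:
        tol = math.sqrt(math.log2(n) / n)  # heuristic tolerance
    for k in range(1, max_k + 1):
        blocks = [bits[i:i + k] for i in range(0, n - k + 1, k)]
        counts = Counter(blocks)
        for pattern in product("01", repeat=k):
            freq = counts["".join(pattern)] / len(blocks)
            if abs(freq - 2.0 ** -k) > tol:
                return False
    return True
```

A perfectly alternating sequence passes at block length 1 but fails at length 2, exactly the kind of structure that single-bit frequencies miss and that makes Borel normality a necessary (not sufficient) condition for randomness.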
Seitsalo, Hubertus; Niemelä, Raija K; Marinescu-Gava, Magdalena; Vuotila, Tuija; Tjäderhane, Leo; Salo, Tuula
2007-01-01
Background Matrix metalloproteinases (MMPs) are proteolytic enzymes that may contribute to tissue destruction in Sjögren's syndrome (SS). Low-dose doxycycline (LDD) inhibits MMPs. We evaluated the efficacy of LDD for the subjective symptoms in primary SS patients. This was a randomized, double blind, placebo controlled cross-over study. 22 patients were randomly assigned to receive either 20 mg LDD or matching placebo twice a day for 10 weeks. The first medication period was followed by 10-week washout period, after which the patient received either LDD or placebo, depending on the first drug received, followed by the second washout period. Stimulated saliva flow rates and pH were measured before and after one and ten weeks of each medication and after washout periods. VAS scale was used to assess the effect of LDD and placebo on following six subjective symptoms: xerostomia; xerophtalmia; difficulty of swallowing; myalgia; arthralgia; and fatigue. The effect was evaluated for each medication and washout period separately. Results Overall, the effects of medications on subjective symptoms were minor. Wilcoxon test demonstrated increased fatigue with LDD during medication (p < 0.05). The differences may, however, reflect normal fluctuation of symptoms in SS patients. Conclusion LDD may not be useful in reducing the primary SS symptoms. PMID:18163919
Does Random Dispersion Help Survival?
NASA Astrophysics Data System (ADS)
Schinazi, Rinaldo B.
2015-04-01
Many species live in colonies that prosper for a while and then collapse. After the collapse the colony survivors disperse randomly and found new colonies that may or may not make it depending on the new environment they find. We use birth and death chains in random environments to model such a population and to argue that random dispersion is a superior strategy for survival.
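As a toy illustration of the modeling ingredients only (a birth-death chain plus a randomly drawn environment; all rates here are hypothetical, far simpler than the paper's model):

```python
import random

def colony_survives(p_good, n_steps, rng, start=5, cap=100):
    """One colony: draw its environment once (good with probability p_good),
    then run a nearest-neighbour birth-death chain with absorption at 0."""
    birth = 0.6 if rng.random() < p_good else 0.4  # hypothetical birth rates
    x = start
    for _ in range(n_steps):
        if x == 0:
            return False  # extinction
        x = min(x + (1 if rng.random() < birth else -1), cap)
    return x > 0
```

Dispersal after a collapse amounts to redrawing the environment for each new colony, which is how random dispersion can rescue a lineage from a bad draw.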
A Random Variable Transformation Process.
ERIC Educational Resources Information Center
Scheuermann, Larry
1989-01-01
Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
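The same idea ports directly from BASIC to any modern language; a Python sketch of two of the seven variates, using inverse-transform sampling and the Box-Muller transform (a sketch, not the RANVAR listing itself):

```python
import math
import random

def exponential(rng, lam):
    """Inverse transform: if U ~ Uniform(0,1), then -ln(1-U)/lam ~ Exp(lam)."""
    return -math.log(1.0 - rng.random()) / lam

def normal(rng, mu=0.0, sigma=1.0):
    """Box-Muller transform: two independent uniforms yield a normal deviate."""
    u1, u2 = rng.random(), rng.random()
    return mu + sigma * math.sqrt(-2.0 * math.log(1.0 - u1)) * math.cos(2.0 * math.pi * u2)

rng = random.Random(0)
x = exponential(rng, 2.0)  # always positive
z = normal(rng)
```

Inverse transform works for any distribution with an invertible CDF; Box-Muller trades two uniforms for a normal because the normal CDF has no closed-form inverse.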
NASA Astrophysics Data System (ADS)
Padrino, Juan C.; Zhang, Duan Z.
2015-11-01
The ensemble phase averaging technique is applied to model mass transport in a porous medium. The porous material is idealized as an ensemble of random networks, where each network consists of a set of junction points representing the pores and tortuous channels connecting them. Inside a channel, fluid transport is assumed to be governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pore mass density. Instead of attempting to solve this equation, an equivalent set of partial differential equations is derived whose solution is sought numerically. As a test problem, we consider the one-dimensional diffusion of a substance from one end of a bounded domain to the other. For a statistically homogeneous and isotropic material, results show that for relatively large times the pore mass density evolution from the new theory is significantly delayed in comparison with the solution of the classical diffusion equation. In the short-time case, when the solution evolves with time as if the domain were semi-infinite, numerical results indicate that the pore mass density becomes a function of the similarity variable x t^(-1/4) rather than the x t^(-1/2) characteristic of classical diffusion. This result was verified analytically. Possible applications of this framework include flow in gas shales. Work supported by the LDRD project of LANL.
Nonvolatile random access memory
NASA Technical Reports Server (NTRS)
Wu, Jiin-Chuan (Inventor); Stadler, Henry L. (Inventor); Katti, Romney R. (Inventor)
1994-01-01
A nonvolatile magnetic random access memory can be achieved by an array of magnet-Hall effect (M-H) elements. The storage function is realized with a rectangular thin-film ferromagnetic material having an in-plane, uniaxial anisotropy and in-plane bipolar remanent magnetization states. The thin-film magnetic element is magnetized by a local applied field, whose direction is used to form either a 0 or 1 state. The element remains in the 0 or 1 state until a switching field is applied to change its state. The stored information is detected by a Hall-effect sensor which senses the fringing field from the magnetic storage element. The circuit design for addressing each cell includes transistor switches for providing a current of selected polarity to store a binary digit through a separate conductor overlying the magnetic element of the cell. To read out a stored binary digit, transistor switches are employed to provide a current through a row of Hall-effect sensors connected in series and enabling a differential voltage amplifier connected to all Hall-effect sensors of a column in series. To avoid read-out voltage errors due to shunt currents through resistive loads of the Hall-effect sensors of other cells in the same column, at least one transistor switch is provided between every pair of adjacent cells in every row; these switches are turned on only in the row of the selected cell.
Ferroelectric random access memories.
Ishiwara, Hiroshi
2012-10-01
Ferroelectric random access memory (FeRAM) is a nonvolatile memory, in which data are stored using hysteretic P-E (polarization vs. electric field) characteristics in a ferroelectric film. In this review, history and characteristics of FeRAMs are first introduced. It is described that there are two types of FeRAMs, capacitor-type and FET-type, and that only the capacitor-type FeRAM is now commercially available. In chapter 2, properties of ferroelectric films are discussed from a viewpoint of FeRAM application, in which particular attention is paid to those of Pb(Zr,Ti)O3, SrBi2Ta2O9, and BiFeO3. Then, cell structures and operation principle of the capacitor-type FeRAMs are discussed in chapter 3. It is described that the stacked technology of ferroelectric capacitors and development of new materials with large remanent polarization are important for fabricating high-density memories. Finally, in chapter 4, the optimized gate structure in ferroelectric-gate field-effect transistors is discussed and experimental results showing excellent data retention characteristics are presented. PMID:23421123
Randomized parcellation based inference.
Da Mota, Benoit; Fritsch, Virgile; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Bromberg, Uli; Conrod, Patricia; Gallinat, Jürgen; Garavan, Hugh; Martinot, Jean-Luc; Nees, Frauke; Paus, Tomas; Pausova, Zdenka; Rietschel, Marcella; Smolka, Michael N; Ströhle, Andreas; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand
2014-04-01
Neuroimaging group analyses are used to relate inter-subject signal differences observed in brain imaging with behavioral or genetic variables and to assess risk factors of brain diseases. The lack of stability and sensitivity of current voxel-based analysis schemes may, however, lead to non-reproducible results. We introduce a new approach to overcome the limitations of standard methods, in which active voxels are detected according to a consensus on several random parcellations of the brain images, while a permutation test controls the false positive risk. On both synthetic and real data, this approach shows higher sensitivity, better accuracy, and higher reproducibility than state-of-the-art methods. In a neuroimaging-genetic application, we find that it succeeds in detecting a significant association between a genetic variant next to the COMT gene and the BOLD signal in the left thalamus for a functional Magnetic Resonance Imaging contrast associated with incorrect responses of the subjects from a Stop Signal Task protocol. PMID:24262376
Phase transitions on random lattices: how random is topological disorder?
Barghathi, Hatem; Vojta, Thomas
2014-09-19
We study the effects of topological (connectivity) disorder on phase transitions. We identify a broad class of random lattices whose disorder fluctuations decay much faster with increasing length scale than those of generic random systems, yielding a wandering exponent of ω=(d-1)/(2d) in d dimensions. The stability of clean critical points is thus governed by the criterion (d+1)ν>2 rather than the usual Harris criterion dν>2, making topological disorder less relevant than generic randomness. The Imry-Ma criterion is also modified, allowing first-order transitions to survive in all dimensions d>1. These results explain a host of puzzling violations of the original criteria for equilibrium and nonequilibrium phase transitions on random lattices. We discuss applications, and we illustrate our theory by computer simulations of random Voronoi and other lattices. PMID:25279615
Random distributed feedback fibre lasers
NASA Astrophysics Data System (ADS)
Turitsyn, Sergei K.; Babin, Sergey A.; Churkin, Dmitry V.; Vatnik, Ilya D.; Nikulin, Maxim; Podivilov, Evgenii V.
2014-09-01
The concept of random lasers, which exploit multiple scattering of photons in an amplifying disordered medium to generate coherent light without a traditional laser resonator, has attracted a great deal of attention in recent years. This research area lies at the interface of the fundamental theory of disordered systems and laser science. The idea was originally proposed in the context of astrophysics in the 1960s by V.S. Letokhov, who studied scattering with "negative absorption" in interstellar molecular clouds. Research on random lasers has since developed into a mature experimental and theoretical field. A simple design for such lasers would be promising for potential applications. However, in traditional random lasers the properties of the output radiation are typically characterized by complex features in the spatial, spectral and time domains, making them less attractive than standard laser systems for practical applications. Recently, an interesting and novel type of one-dimensional random laser that operates in a conventional telecommunication fibre without any pre-designed resonator mirrors, the random distributed feedback fibre laser, was demonstrated. The positive feedback required for laser generation in random fibre lasers is provided by Rayleigh scattering from the inhomogeneities of the refractive index that are naturally present in silica glass. In the proposed laser concept, the randomly backscattered light is amplified through the Raman effect, providing distributed gain over distances up to 100 km. Although the effective reflection due to Rayleigh scattering is extremely small (~0.1%), the lasing threshold may be exceeded when a sufficiently large distributed Raman gain is provided. Such a random distributed feedback fibre laser has a number of interesting and attractive features. The fibre waveguide geometry provides transverse confinement, and effectively one-dimensional random distributed feedback leads to the generation
Students' Misconceptions about Random Variables
ERIC Educational Resources Information Center
Kachapova, Farida; Kachapov, Ilias
2012-01-01
This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses. (Contains 2 figures.)
NASA Astrophysics Data System (ADS)
Jung, P.; Talkner, P.
2010-09-01
A simple way to convert a purely random sequence of events into a signal with a strong periodic component is proposed. The signal consists of those instants of time at which the length of the random sequence exceeds an integer multiple of a given number. The larger this number, the more pronounced the periodic behavior becomes.
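A sketch of this construction as described (function name is mine): given the arrival times of the random events, keep only those instants at which the running event count reaches a multiple of m.

```python
def threshold_crossings(event_times, m):
    """Return the instants at which the cumulative number of random events
    reaches an integer multiple of m; for arrivals at mean rate r these
    instants are approximately periodic with period m / r."""
    return [t for count, t in enumerate(event_times, start=1) if count % m == 0]

signal = threshold_crossings(range(1, 21), 5)  # -> [5, 10, 15, 20]
```

Averaging over m inter-event intervals is what suppresses the fluctuations of the individual arrivals, so larger m gives a more pronounced periodic component.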
Computer generation of random deviates.
Cormack, J; Shuter, B
1991-06-01
The need for random deviates arises in many scientific applications, such as the simulation of physical processes, numerical evaluation of complex mathematical formulae and the modeling of decision processes. In medical physics, Monte Carlo simulations have been used in radiology, radiation therapy and nuclear medicine. Specific instances include the modelling of x-ray scattering processes and the addition of random noise to images or curves in order to assess the effects of various processing procedures. Reliable sources of random deviates with statistical properties indistinguishable from true random deviates are a fundamental necessity for such tasks. This paper provides a review of computer algorithms which can be used to generate uniform random deviates and other distributions of interest to medical physicists, along with a few caveats relating to various problems and pitfalls which can occur. Source code listings for the generators discussed (in FORTRAN, Turbo-PASCAL and Data General ASSEMBLER) are available on request from the authors. PMID:1747086
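One task the review mentions, adding random noise to a curve before assessing a processing procedure, reduces to a few lines in a modern language; a Python sketch (function name is mine):

```python
import random

def add_noise(curve, sigma, seed=None):
    """Return a copy of the curve with zero-mean Gaussian noise of standard
    deviation sigma added to each sample; seeding makes the trial repeatable."""
    rng = random.Random(seed)
    return [y + rng.gauss(0.0, sigma) for y in curve]
```

Seeding matters for exactly the reason the review stresses reliability: a repeatable noise realization lets different processing procedures be compared on identical inputs.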
Randomness versus nonlocality and entanglement.
Acín, Antonio; Massar, Serge; Pironio, Stefano
2012-03-01
The outcomes obtained in Bell tests involving two-outcome measurements on two subsystems can, in principle, generate up to 2 bits of randomness. However, the maximal violation of the Clauser-Horne-Shimony-Holt inequality guarantees the generation of only 1.23 bits of randomness. We prove here that quantum correlations with arbitrarily little nonlocality and states with arbitrarily little entanglement can be used to certify that close to the maximum of 2 bits of randomness are produced. Our results show that nonlocality, entanglement, and randomness are inequivalent quantities. They also imply that device-independent quantum key distribution with an optimal key generation rate is possible by using almost-local correlations and that device-independent randomness generation with an optimal rate is possible with almost-local correlations and with almost-unentangled states. PMID:22463395
Dynamic response of random parametered structures with random excitation. [DYNAMO
Branstetter, L.J.; Paez, T.L.
1986-02-01
A Taylor series expansion technique is used for numerical evaluation of the statistical response moments of a linear multidegree of freedom (MDF) system having random stiffness characteristics, when excited by either stationary or nonstationary random load components. Equations are developed for the cases of white noise loading and single step memory loading, and a method is presented to extend the solution to multistep memory loading. The equations are greatly simplified by the assumption that all random quantities are normally distributed. A computer program is developed to calculate the response moments of example systems. A program user's manual and listing (DYNAMO) are included. Future extensions of the work and potential applications are discussed.
Rezaei, Farzin; Ghaderi, Ebrahim; Mardani, Roya; Hamidi, Seiran; Hassanzadeh, Kambiz
2016-06-01
To date, no medication has been approved as an effective treatment for methamphetamine dependence. Topiramate has attracted considerable attention as a treatment for the dependence on alcohol and stimulants. Therefore, this study aimed to evaluate the effect of topiramate for methamphetamine dependence. This study was a double-blind, randomized, placebo-controlled trial. In the present investigation, 62 methamphetamine-dependent adults were enrolled and randomized into two groups, and received topiramate or a placebo for 10 weeks in escalating doses from 50 mg/day to the target maintenance dose of 200 mg/day. Addiction severity index (ASI) and craving scores were registered every week. The Beck questionnaire was also given to each participant at baseline and every 2 weeks during the treatment. Urine samples were collected at baseline and every 2 weeks during the treatment. Fifty-seven patients completed 10 weeks of the trial. There was no significant difference between both groups in the mean percentage of prescribed capsules taken by the participants. At week six, the topiramate group showed a significantly lower proportion of methamphetamine-positive urine tests in comparison with the placebo group (P = 0.01). In addition, there were significantly lower scores in the topiramate group in comparison with the placebo group in two domains of ASI: drug use severity (P < 0.001) and drug need (P < 0.001). Furthermore, the craving score (duration) significantly declined in the topiramate patients compared to those receiving the placebo. In conclusion, the results of this trial suggest that topiramate may be beneficial for the treatment of methamphetamine dependence. PMID:26751259
Mental skills training with basic combat training soldiers: A group-randomized trial.
Adler, Amy B; Bliese, Paul D; Pickering, Michael A; Hammermeister, Jon; Williams, Jason; Harada, Coreen; Csoka, Louis; Holliday, Bernie; Ohlson, Carl
2015-11-01
Cognitive skills training has been linked to greater skills, self-efficacy, and performance. Although research in a variety of organizational settings has demonstrated training efficacy, few studies have assessed cognitive skills training using rigorous, longitudinal, randomized trials with active controls. The present study examined cognitive skills training in a high-risk occupation by randomizing 48 platoons (N = 2,432 soldiers) in basic combat training to either (a) mental skills training or (b) an active comparison condition (military history). Surveys were conducted at baseline and 3 times across the 10-week course. Multilevel mixed-effects models revealed that soldiers in the mental skills training condition reported greater use of a range of cognitive skills and increased confidence relative to those in the control condition. Soldiers in the mental skills training condition also performed better on obstacle course events, rappelling, physical fitness, and initial weapons qualification scores, although effects were generally moderated by gender and previous experience. Overall, effects were small; however, given the rigor of the design, the findings clearly contribute to the broader literature by providing supporting evidence that cognitive training skills can enhance performance in occupational and sports settings. Future research should address gender and experience to determine the need for targeting such training appropriately. PMID:26011718
BE-ACTIV for Depression in Nursing Homes: Primary Outcomes of a Randomized Clinical Trial
Van Haitsma, Kimberly; Schoenbachler, Ben; Looney, Stephen W.
2015-01-01
Objectives. To report the primary outcomes of a cluster randomized clinical trial of Behavioral Activities Intervention (BE-ACTIV), a behavioral intervention for depression in nursing homes. Method. Twenty-three nursing homes were randomized to BE-ACTIV or treatment as usual (TAU); 82 depressed long-term care residents were recruited from these nursing homes. BE-ACTIV participants received 10 weeks of individual therapy after a 2-week baseline. TAU participants received weekly research visits. Follow-up assessments occurred at 3 and 6 months posttreatment. Results. BE-ACTIV group participants showed better diagnostic recovery at posttreatment in intent-to-treat analyses adjusted for clustering. They were more likely to be in remission than TAU participants at posttreatment and at 3 months posttreatment, but not at 6 months. Self-reported depressive symptoms and functioning improved in both groups, but there were no significant treatment-by-time interactions in these variables. Discussion. BE-ACTIV was superior to TAU in moving residents to full remission from depression. The treatment was well received by nursing home staff and accepted by residents. A large proportion of participants remained symptomatic at posttreatment, despite taking one or more antidepressants. The results illustrate the potential power of an attentional intervention to improve self-reported mood and functioning, but also the difficulties related to both studying and implementing effective treatments in nursing homes. PMID:24691156
Jay, Kenneth; Brandt, Mikkel; Jakobsen, Markus Due; Sundstrup, Emil; Berthelsen, Kasper Gymoese; Schraefel, Mc; Sjøgaard, Gisela; Andersen, Lars L
2016-08-01
People with chronic musculoskeletal pain often experience pain-related fear of movement and avoidance behavior. The Fear-Avoidance model proposes a possible mechanism at least partly explaining the development and maintenance of chronic pain. People who interpret pain during movement as being potentially harmful to the organism may initiate a vicious behavioral cycle by generating pain-related fear of movement accompanied by avoidance behavior and hyper-vigilance. This study investigates whether an individually adapted multifactorial approach comprised of biopsychosocial elements, with a focus on physical exercise, mindfulness, and education on pain and behavior, can decrease work-related fear-avoidance beliefs. As part of a large-scale 10-week worksite randomized controlled intervention trial focusing on company initiatives to combat work-related musculoskeletal pain and stress, we evaluated fear-avoidance behavior in 112 female laboratory technicians with chronic neck, shoulder, upper back, lower back, elbow, and hand/wrist pain using the Fear-Avoidance Beliefs Questionnaire at baseline, before group allocation, and again at the post-intervention follow-up 10 weeks later. A significant group-by-time interaction was observed (P < 0.05) for work-related fear-avoidance beliefs. The between-group difference at follow-up was -2.2 (-4.0 to -0.5), corresponding to a small to medium effect size (Cohen's d = 0.30). Our study shows that work-related, but not leisure-time activity-related, fear-avoidance beliefs, as assessed by the Fear-Avoidance Beliefs Questionnaire, can be significantly reduced by 10 weeks of physical-cognitive-mindfulness training in female laboratory technicians with chronic pain. PMID:27559939
Armah, George; Lewis, Kristen D. C.; Cortese, Margaret M.; Parashar, Umesh D.; Ansah, Akosua; Gazley, Lauren; Victor, John C.; McNeal, Monica M.; Binka, Fred; Steele, A. Duncan
2016-01-01
Background. The recommended schedule for receipt of 2-dose human rotavirus vaccine (HRV) coincides with receipt of the first and second doses of diphtheria, pertussis, and tetanus vaccine (ie, 6 and 10 weeks of age, respectively). Alternative schedules and additional doses of HRV have been proposed and may improve vaccine performance in low-income countries. Methods. In this randomized trial in rural Ghana, HRV was administered at ages 6 and 10 weeks (group 1), 10 and 14 weeks (group 2), or 6, 10, and 14 weeks (group 3). We compared serum antirotavirus immunoglobulin A (IgA) seroconversion (≥20 U/mL) and geometric mean concentrations (GMCs) between group 1 and groups 2 and 3. Results. Ninety-three percent of participants (424 of 456) completed the study per protocol. In groups 1, 2, and 3, the IgA seroconversion frequencies among participants with IgA levels of <20 U/mL at baseline were 28.9%, 37.4%, and 43.4%, respectively (group 1 vs group 3, P = .014; group 1 vs group 2, P = .163). Postvaccination IgA GMCs were 22.1 U/mL, 26.5 U/mL, and 32.6 U/mL in groups 1, 2, and 3, respectively (group 1 vs group 3, P = .038; group 1 vs group 2, P = .304). Conclusions. A third dose of HRV resulted in increased seroconversion frequencies and GMCs, compared with 2 doses administered at 6 and 10 weeks of age. Since there is no correlate of protection, a postmarketing effectiveness study is required to determine whether the improvement in immune response translates into a public health benefit in low-income countries. Clinical Trials Registration. NCT015751. PMID:26823335
Lowest eigenvalues of random Hamiltonians
Shen, J. J.; Zhao, Y. M.; Arima, A.; Yoshinaga, N.
2008-05-15
In this article we study the lowest eigenvalues of random Hamiltonians for both fermion and boson systems. We show that an empirical formula for evaluating the lowest eigenvalues of random Hamiltonians in terms of energy centroids and widths of eigenvalues is applicable to many different systems. We improve the accuracy of the formula by considering the third central moment. We show that these formulas are applicable not only to the evaluation of the lowest energy but also to the evaluation of excited energies of systems under random two-body interactions.
Random graphs with hidden color.
Söderberg, Bo
2003-07-01
We propose and investigate a unifying class of sparse random graph models, based on a hidden coloring of edge-vertex incidences, extending an existing approach, random graphs with a given degree distribution, in a way that admits a nontrivial correlation structure in the resulting graphs. The approach unifies a number of existing random graph ensembles within a common general formalism, and allows for the analytic calculation of observable graph characteristics. In particular, generating function techniques are used to derive the size distribution of connected components (clusters) as well as the location of the percolation threshold where a giant component appears. PMID:12935185
Random sequential adsorption on fractals.
NASA Astrophysics Data System (ADS)
Ciesla, Michal; Barbasz, Jakub
2012-07-28
Irreversible adsorption of spheres on flat collectors having dimension d < 2 is studied. Molecules are adsorbed on Sierpinski's triangle and carpet-like fractals (1 < d < 2), and on a general Cantor set (d < 1). The adsorption process is modeled numerically using the random sequential adsorption (RSA) algorithm. The paper concentrates on the measurement of fundamental properties of coverages, i.e., the maximal random coverage ratio and the density autocorrelation function, as well as RSA kinetics. The obtained results allow us to improve the phenomenological relation between the maximal random coverage ratio and the collector dimension. Moreover, simulations show that, in general, most known dimensional properties of adsorbed monolayers remain valid for non-integer dimensions. PMID:22852643
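The RSA algorithm the record refers to is simple to state: draw trial positions at random and accept only those that do not overlap previously adsorbed particles, until no further particle fits. A minimal one-dimensional sketch (hard segments on a line rather than spheres on a fractal; all names and parameter values are illustrative):

```python
import random

def rsa_1d(length, diameter, attempts, seed=0):
    """Random sequential adsorption of hard segments on a line:
    draw trial positions uniformly and keep only non-overlapping ones."""
    rng = random.Random(seed)
    placed = []
    for _ in range(attempts):
        x = rng.uniform(0.0, length - diameter)
        # a trial segment [x, x + diameter] is accepted only if it
        # overlaps no previously adsorbed segment
        if all(abs(x - y) >= diameter for y in placed):
            placed.append(x)
    # coverage ratio: fraction of the line occupied by adsorbed segments
    return len(placed) * diameter / length

theta = rsa_1d(length=100.0, diameter=1.0, attempts=20000)
```

For long runs the 1-D coverage approaches the jamming limit (Renyi's parking constant, roughly 0.7476); on fractal collectors the paper relates the analogous maximal coverage to the collector dimension.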
Covariate-based constrained randomization of group-randomized trials.
Moulton, Lawrence H
2004-01-01
Group-randomized study designs are useful when individually randomized designs are either not possible, or will not be able to estimate the parameters of interest. Blocked and/or stratified (for example, pair-matched) designs have been used, and their properties statistically evaluated by many researchers. Group-randomized trials often have small numbers of experimental units, and strong, geographically induced between-unit correlation, which increase the chance of obtaining a "bad" randomization outcome. This article describes a procedure--random selection from a list of acceptable allocations--to allocate treatment conditions in a way that ensures balance on relevant covariates. Numerous individual- and group-level covariates can be balanced using exact or caliper criteria. Simulation results indicate that this method has good frequency properties, but some care may be needed not to overly constrain the randomization. There is a trade-off between achieving good balance through a highly constrained design, and jeopardizing the appearance of impartiality of the investigator and potentially departing from the nominal Type I error. PMID:16279255
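The procedure described (random selection from a list of acceptable allocations) can be sketched for the simplest case of two arms and one cluster-level covariate; the cluster sizes and caliper below are made-up illustrative values, not from the article:

```python
import random
from itertools import combinations

def constrained_randomize(cluster_sizes, caliper, seed=0):
    """Enumerate all equal-split allocations of clusters to two arms,
    keep those balanced on the covariate (arm totals within `caliper`),
    then randomly select one acceptable allocation."""
    n = len(cluster_sizes)
    total = sum(cluster_sizes)
    acceptable = []
    for arm_a in combinations(range(n), n // 2):
        a_sum = sum(cluster_sizes[i] for i in arm_a)
        if abs(a_sum - (total - a_sum)) <= caliper:
            acceptable.append(frozenset(arm_a))
    rng = random.Random(seed)
    return rng.choice(acceptable), len(acceptable)

sizes = [120, 95, 110, 140, 100, 90]  # illustrative cluster sizes
arm_a, n_acceptable = constrained_randomize(sizes, caliper=15)
```

The size of the acceptable list relative to all possible allocations is one way to monitor the trade-off the article warns about: a very short list means the design is highly constrained.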
Quantifying randomness in real networks
Orsini, Chiara; Dankulov, Marija M.; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E.; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri
2015-01-01
Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks—the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain—and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs. PMID:26482121
Scattering from a random surface
Abarbanel, H.D.I.
1980-11-01
We give a formulation of the problem of propagation of scalar waves over a random surface. By a judicious choice of variables we are able to show that this situation is equivalent to propagation of these waves through a medium of random fluctuations with fluctuating source and receiver. The wave equation in the new coordinates has an additional term, the fluctuation operator, which depends on derivatives of the surface in space and time. An expansion in the fluctuation operator is given which guarantees the desired boundary conditions at every order. We treat both the cases where the surface is time dependent, such as the sea surface, or fixed in time. Also discussed is the situation where the source and receiver lie between the random surface and another, possibly also random, surface. In detail we consider acoustic waves for which the surfaces are pressure release. The method is directly applicable to electromagnetic waves and other boundary conditions.
A Randomized Central Limit Theorem
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2010-05-01
The Central Limit Theorem (CLT), one of the most elemental pillars of Probability Theory and Statistical Physics, asserts that: the universal probability law of large aggregates of independent and identically distributed random summands with zero mean and finite variance, scaled by the square root of the aggregate-size (√{n}), is Gaussian. The scaling scheme of the CLT is deterministic and uniform - scaling all aggregate-summands by the common and deterministic factor √{n}. This Letter considers scaling schemes which are stochastic and non-uniform, and presents a "Randomized Central Limit Theorem" (RCLT): we establish a class of random scaling schemes which yields universal probability laws of large aggregates of independent and identically distributed random summands. The RCLT universal probability laws, in turn, are the one-sided and the symmetric Lévy laws.
Quantum-noise randomized ciphers
NASA Astrophysics Data System (ADS)
Nair, Ranjith; Yuen, Horace P.; Corndorf, Eric; Eguchi, Takami; Kumar, Prem
2006-11-01
We review the notion of a classical random cipher and its advantages. We sharpen the usual description of random ciphers to a particular mathematical characterization suggested by the salient feature responsible for their increased security. We describe a concrete system known as αη and show that it is equivalent to a random cipher in which the required randomization is effected by coherent-state quantum noise. We describe the currently known security features of αη and similar systems, including lower bounds on the unicity distances against ciphertext-only and known-plaintext attacks. We show how αη used in conjunction with any standard stream cipher such as the Advanced Encryption Standard provides an additional, qualitatively different layer of security from physical encryption against known-plaintext attacks on the key. We refute some claims in the literature that αη is equivalent to a nonrandom stream cipher.
Random Selection for Drug Screening
Center for Human Reliability Studies
2007-05-01
Sampling is the process of choosing some members out of a group or population. Probability sampling, or random sampling, is the process of selecting members by chance with a known probability of each individual being chosen.
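The distinction drawn above, selection by chance with a known inclusion probability, is exactly what uniform sampling without replacement provides. A minimal sketch (the employee roster is an invented example):

```python
import random

def simple_random_sample(population, k, seed=None):
    """Probability (random) sampling: each member has the same known
    inclusion probability k / len(population) of being chosen."""
    rng = random.Random(seed)
    return rng.sample(population, k)

pool = [f"employee_{i:03d}" for i in range(50)]  # illustrative roster
chosen = simple_random_sample(pool, k=5, seed=0)
```

Here every individual's chance of selection is 5/50 = 10%, which is the "known probability" the definition requires.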
Control theory for random systems
NASA Technical Reports Server (NTRS)
Bryson, A. E., Jr.
1972-01-01
A survey is presented of the current knowledge available for designing and predicting the effectiveness of controllers for dynamic systems which can be modeled by ordinary differential equations. A short discussion of feedback control is followed by a description of deterministic controller design and the concept of system state. The need for more realistic disturbance models led to the use of stochastic process concepts, in particular the Gauss-Markov process. A compensator controlled system, with random forcing functions, random errors in the measurements, and random initial conditions, is treated as constituting a Gauss-Markov random process; hence the mean-square behavior of the controlled system is readily predicted. As an example, a compensator is designed for a helicopter to maintain it in hover in a gusty wind over a point on the ground.
Diffraction by random Ronchi gratings.
Torcal-Milla, Francisco Jose; Sanchez-Brea, Luis Miguel
2016-08-01
In this work, we obtain analytical expressions for the near- and far-field diffraction of random Ronchi diffraction gratings where the slits of the grating are randomly displaced around their periodical positions. We theoretically show that the effect of randomness in the position of the slits of the grating produces a decrease of the contrast and even disappearance of the self-images for high randomness level at the near field. On the other hand, it cancels high-order harmonics in far field, resulting in only a few central diffraction orders. Numerical simulations by means of the Rayleigh-Sommerfeld diffraction formula are performed in order to corroborate the analytical results. These results are of interest for industrial and technological applications where manufacture errors need to be considered. PMID:27505363
Experimental evidence of quantum randomness incomputability
Calude, Cristian S.; Dinneen, Michael J.; Dumitrescu, Monica; Svozil, Karl
2010-08-15
In contrast with software-generated randomness (called pseudo-randomness), quantum randomness can be proven incomputable; that is, it is not exactly reproducible by any algorithm. We provide experimental evidence of incomputability--an asymptotic property--of quantum randomness by performing finite tests of randomness inspired by algorithmic information theory.
On Pfaffian Random Point Fields
NASA Astrophysics Data System (ADS)
Kargin, V.
2014-02-01
We study Pfaffian random point fields by using the Moore-Dyson quaternion determinants. First, we give sufficient conditions that ensure that a self-dual quaternion kernel defines a valid random point field, and then we prove a CLT for Pfaffian point fields. The proofs are based on a new quaternion extension of the Cauchy-Binet determinantal identity. In addition, we derive the Fredholm determinantal formulas for the Pfaffian point fields which use the quaternion determinant.
Digital random-number generator
NASA Technical Reports Server (NTRS)
Brocker, D. H.
1973-01-01
For binary digit array of N bits, use N noise sources to feed N nonlinear operators; each flip-flop in digit array is set by nonlinear operator to reflect whether amplitude of generator which feeds it is above or below mean value of generated noise. Fixed-point uniform distribution random number generation method can also be used to generate random numbers with other than uniform distribution.
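A software sketch of the comparator scheme described above, one noise source per bit with each flip-flop set by whether the source's amplitude exceeds the mean of its generated noise, might look like this (Gaussian noise stands in for the hardware sources; all names are assumptions, not the original design):

```python
import random

def noise_rng_word(n_bits, history_len=1000, seed=0):
    """One N-bit random word: each bit has its own (simulated Gaussian)
    noise source, and the bit is set to 1 when the source's current
    amplitude exceeds the mean of that source's generated noise."""
    rng = random.Random(seed)
    bits = []
    for _ in range(n_bits):
        # estimate the source's mean from a run of generated noise
        history = [rng.gauss(0.0, 1.0) for _ in range(history_len)]
        mean = sum(history) / history_len
        # comparator: current amplitude above or below the mean
        bits.append(1 if rng.gauss(0.0, 1.0) > mean else 0)
    return bits

word = noise_rng_word(16)
```

Non-uniform variates can then be obtained from such uniform bits by the usual inverse-transform method, as the note suggests.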
Quasi-Random Sequence Generators.
1994-03-01
Version 00 LPTAU generates quasi-random sequences. The sequences are uniformly distributed sets of L=2**30 points in the N-dimensional unit cube: I**N=[0,1]. The sequences are used as nodes for multidimensional integration, as searching points in global optimization, as trial points in multicriteria decision making, and as quasi-random points for quasi-Monte Carlo algorithms.
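LPTAU itself implements Sobol-type LPτ sequences; the flavor of quasi-random integration can be illustrated with the much simpler one-dimensional van der Corput sequence (a stand-in for illustration, not the LPTAU algorithm):

```python
def van_der_corput(n, base=2):
    """First n points of the van der Corput low-discrepancy sequence:
    reflect the base-`base` digits of k about the radix point."""
    points = []
    for k in range(1, n + 1):
        j, x, denom = k, 0.0, 1.0
        while j:
            denom *= base
            j, digit = divmod(j, base)
            x += digit / denom
        points.append(x)
    return points

# quasi-Monte Carlo estimate of the integral of x^2 over [0, 1] (exact: 1/3)
pts = van_der_corput(1024)
estimate = sum(x * x for x in pts) / len(pts)
```

Because the points fill the unit interval far more evenly than pseudo-random draws, the integration error shrinks roughly like log(n)/n rather than 1/sqrt(n).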
Randomness and degrees of irregularity.
Pincus, S; Singer, B H
1996-01-01
The fundamental question "Are sequential data random?" arises in myriad contexts, often with severe data length constraints. Furthermore, there is frequently a critical need to delineate nonrandom sequences in terms of closeness to randomness--e.g., to evaluate the efficacy of therapy in medicine. We address both these issues from a computable framework via a quantification of regularity. ApEn (approximate entropy) defines maximal randomness for sequences of arbitrary length and is applicable to sequences as short as N = 5 points. An infinite sequence formulation of randomness is introduced that retains the operational (and computable) features of the finite case. In the infinite sequence setting, we indicate how the "foundational" definition of independence in probability theory, and the definition of normality in number theory, reduce to limit theorems without rates of convergence, from which we utilize ApEn to address rates of convergence (of a deficit from maximal randomness), refining the aforementioned concepts in a computationally essential manner. Representative applications among many are indicated to assess (i) random number generation output; (ii) well-shuffled arrangements; and (iii) (the quality of) bootstrap replicates. PMID:11607637
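A direct implementation of the ApEn statistic discussed above is short; the template length m = 2 and tolerance r = 0.2 below are conventional illustrative choices, not prescribed by the abstract:

```python
import math

def approximate_entropy(seq, m=2, r=0.2):
    """ApEn(m, r): compares the log-frequency with which length-m
    templates stay within tolerance r against the same quantity for
    length-(m + 1) templates; larger values mean greater irregularity."""
    def phi(mm):
        n = len(seq) - mm + 1
        templates = [seq[i:i + mm] for i in range(n)]
        total = 0.0
        for t in templates:
            # count templates matching t within tolerance r (self included)
            matches = sum(
                1 for u in templates
                if max(abs(a - b) for a, b in zip(t, u)) <= r
            )
            total += math.log(matches / n)
        return total / n
    return phi(m) - phi(m + 1)

apen_regular = approximate_entropy([0, 1] * 30)  # perfectly regular input
```

A strictly alternating sequence is maximally regular, so its ApEn is close to zero, while irregular data yield larger values.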
Phase Transitions on Random Lattices: How Random is Topological Disorder?
NASA Astrophysics Data System (ADS)
Barghathi, Hatem; Vojta, Thomas
2015-03-01
We study the effects of topological (connectivity) disorder on phase transitions. We identify a broad class of random lattices whose disorder fluctuations decay much faster with increasing length scale than those of generic random systems, yielding a wandering exponent of ω = (d - 1) / (2 d) in d dimensions. The stability of clean critical points is thus governed by the criterion (d + 1) ν > 2 rather than the usual Harris criterion dν > 2 , making topological disorder less relevant than generic randomness. The Imry-Ma criterion is also modified, allowing first-order transitions to survive in all dimensions d > 1 . These results explain a host of puzzling violations of the original criteria for equilibrium and nonequilibrium phase transitions on random lattices. We discuss applications, and we illustrate our theory by computer simulations of random Voronoi and other lattices. This work was supported by the NSF under Grant Nos. DMR-1205803 and PHYS-1066293. We acknowledge the hospitality of the Aspen Center for Physics.
Full randomness from arbitrarily deterministic events.
Gallego, Rodrigo; Masanes, Lluis; De La Torre, Gonzalo; Dhara, Chirag; Aolita, Leandro; Acín, Antonio
2013-01-01
Do completely unpredictable events exist? Classical physics excludes fundamental randomness. Although quantum theory makes probabilistic predictions, this does not imply that nature is random, as randomness should be certified without relying on the complete structure of the theory being used. Bell tests approach the question from this perspective. However, they require prior perfect randomness, falling into a circular reasoning. A Bell test that generates perfect random bits from bits possessing high-but less than perfect-randomness has recently been obtained. Yet, the main question remained open: does any initial randomness suffice to certify perfect randomness? Here we show that this is indeed the case. We provide a Bell test that uses arbitrarily imperfect random bits to produce bits that are, under the non-signalling principle assumption, perfectly random. This provides the first protocol attaining full randomness amplification. Our results have strong implications onto the debate of whether there exist events that are fully random. PMID:24173040
On Convergent Probability of a Random Walk
ERIC Educational Resources Information Center
Lee, Y.-F.; Ching, W.-K.
2006-01-01
This note introduces an interesting random walk on a straight path with cards of random numbers. The method of recurrent relations is used to obtain the convergent probability of the random walk with different initial positions.
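The method of recurrence relations mentioned above can be illustrated with the classic symmetric case, where the absorption probability p[i] satisfies p[i] = (p[i-1] + p[i+1]) / 2 (a generic sketch of the technique, not the note's specific card-based walk):

```python
def absorb_right_prob(n, start, tol=1e-12):
    """Probability that a symmetric random walk on {0, ..., n}, started
    at `start`, hits n before 0. Iterates the recurrence
    p[i] = (p[i-1] + p[i+1]) / 2 to a fixed point; the closed-form
    solution of the recurrence is start / n."""
    p = [0.0] * (n + 1)
    p[n] = 1.0  # boundary conditions: absorbed at n with certainty
    while True:
        delta = 0.0
        for i in range(1, n):
            new = 0.5 * (p[i - 1] + p[i + 1])
            delta = max(delta, abs(new - p[i]))
            p[i] = new
        if delta < tol:
            return p[start]

prob = absorb_right_prob(n=10, start=3)
```

Varying `start` reproduces the dependence of the convergent probability on the initial position that the note analyzes.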
Wave propagation through a random medium - The random slab problem
NASA Technical Reports Server (NTRS)
Acquista, C.
1978-01-01
The first-order smoothing approximation yields integral equations for the mean and the two-point correlation function of a wave in a random medium. A method is presented for the approximate solution of these equations that combines features of the eiconal approximation and of the Born expansion. This method is applied to the problem of reflection and transmission of a plane wave by a slab of a random medium. Both the mean wave and the covariance are calculated to determine the reflected and transmitted amplitudes and intensities.
Cover times of random searches
NASA Astrophysics Data System (ADS)
Chupeau, Marie; Bénichou, Olivier; Voituriez, Raphaël
2015-10-01
How long must one undertake a random search to visit all sites of a given domain? This time, known as the cover time, is a key observable to quantify the efficiency of exhaustive searches, which require a complete exploration of an area and not only the discovery of a single target. Examples range from immune-system cells chasing pathogens to animals harvesting resources, from robotic exploration for cleaning or demining to the task of improving search algorithms. Despite its broad relevance, the cover time has remained elusive and so far explicit results have been scarce and mostly limited to regular random walks. Here we determine the full distribution of the cover time for a broad range of random search processes, including Lévy strategies, intermittent strategies, persistent random walks and random walks on complex networks, and reveal its universal features. We show that for all these examples the mean cover time can be minimized, and that the corresponding optimal strategies also minimize the mean search time for a single target, unambiguously pointing towards their robustness.
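For the simplest process covered, a regular random walk, the cover time is easy to sample directly; the sketch below uses an n-site cycle, for which the mean cover time is known to be n(n-1)/2 (parameter values are illustrative):

```python
import random

def cover_time_cycle(n, seed):
    """Number of steps a simple symmetric random walk needs to visit
    every site of an n-site cycle (one sample of the cover time)."""
    rng = random.Random(seed)
    pos, visited, steps = 0, {0}, 0
    while len(visited) < n:
        pos = (pos + rng.choice((-1, 1))) % n
        visited.add(pos)
        steps += 1
    return steps

# estimate the mean cover time by averaging independent samples;
# for an n-cycle the exact mean is n * (n - 1) / 2
samples = [cover_time_cycle(20, seed=s) for s in range(200)]
mean_cover = sum(samples) / len(samples)
```

The full distribution studied in the paper is obtained the same way, by histogramming the samples rather than averaging them.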
Random root movements in weightlessness.
Johnsson, A; Karlsson, C; Iversen, T H; Chapman, D K
1996-02-01
The dynamics of root growth was studied in weightlessness. In the absence of the gravitropic reference direction during weightlessness, root movements could be controlled by spontaneous growth processes, without any corrective growth induced by the gravitropic system. If truly random in nature, the bending behavior should follow so-called 'random walk' mathematics during weightlessness. Predictions from this hypothesis were critically tested. In a Spacelab ESA-experiment, denoted RANDOM and carried out during the IML-2 Shuttle flight in July 1994, the growth of garden cress (Lepidium sativum) roots was followed by time lapse photography at 1-h intervals. The growth pattern was recorded for about 20 h. Root growth was significantly smaller in weightlessness as compared to gravity (control) conditions. It was found that the roots performed spontaneous movements in weightlessness. The average direction of deviation of the plants consistently stayed equal to zero, despite these spontaneous movements. The average squared deviation increased linearly with time as predicted theoretically (but only for 8-10 h). Autocorrelation calculations showed that bendings of the roots, as determined from the 1-h photographs, were uncorrelated after about a 2-h interval. It is concluded that random processes play an important role in root growth. Predictions from a random walk hypothesis as to the growth dynamics could explain parts of the growth patterns recorded. This test of the hypothesis required microgravity conditions as provided for in a space experiment. PMID:11541141
Propagation in multiscale random media
NASA Astrophysics Data System (ADS)
Balk, Alexander M.
2003-10-01
Many studies consider media with microstructure, which has variations on some microscale, while the macroproperties are under investigation. Sometimes the medium has several microscales, all of them being much smaller than the macroscale. Sometimes the variations on the macroscale are also included, which are taken into account by some procedures, like WKB or geometric optics. What if the medium has variations on all scales from microscale to macroscale? This situation occurs in several practical problems. The talk is about such situations, in particular, passive tracer in a random velocity field, wave propagation in a random medium, Schrödinger equation with random potential. To treat such problems we have developed the statistical near-identity transformation. We find anomalous attenuation of the pulse propagating in a multiscale medium.
Relatively Random: Context Effects on Perceived Randomness and Predicted Outcomes
ERIC Educational Resources Information Center
Matthews, William J.
2013-01-01
This article concerns the effect of context on people's judgments about sequences of chance outcomes. In Experiment 1, participants judged whether sequences were produced by random, mechanical processes (such as a roulette wheel) or skilled human action (such as basketball shots). Sequences with lower alternation rates were judged more likely to…
A Randomized Experiment Comparing Random and Cutoff-Based Assignment
ERIC Educational Resources Information Center
Shadish, William R.; Galindo, Rodolfo; Wong, Vivian C.; Steiner, Peter M.; Cook, Thomas D.
2011-01-01
In this article, we review past studies comparing randomized experiments to regression discontinuity designs, mostly finding similar results, but with significant exceptions. The latter might be due to potential confounds of study characteristics with assignment method or with failure to estimate the same parameter over methods. In this study, we…
Molecular random tilings as glasses
Garrahan, Juan P.; Stannard, Andrew; Blunt, Matthew O.; Beton, Peter H.
2009-01-01
We have recently shown that p-terphenyl-3,5,3′,5′-tetracarboxylic acid adsorbed on graphite self-assembles into a two-dimensional rhombus random tiling. This tiling is close to ideal, displaying long-range correlations punctuated by sparse localized tiling defects. In this article we explore the analogy between dynamic arrest in this type of random tilings and that of structural glasses. We show that the structural relaxation of these systems is via the propagation–reaction of tiling defects, giving rise to dynamic heterogeneity. We study the scaling properties of the dynamics and discuss connections with kinetically constrained models of glasses. PMID:19720990
Mode statistics in random lasers
Zaitsev, Oleg
2006-12-15
Representing an ensemble of random lasers with an ensemble of random matrices, we compute the average number of lasing modes and its fluctuations. The regimes of weak and strong coupling of the passive resonator to the environment are considered. In the latter case, contrary to an earlier claim in the literature, we do not find a power-law dependence of the average mode number on the pump strength. For the relative fluctuations, however, a power law can be established. It is shown that, due to the mode competition, the distribution of the number of excited modes over an ensemble of lasers is not binomial.
Neutron transport in random media
Makai, M.
1996-08-01
The survey reviews the methods available in the literature which allow a discussion of corium recriticality after a severe accident and a characterization of the corium. It appears that to date no one has considered the eigenvalue problem, though for the source problem several approaches have been proposed. The mathematical formulation of a random medium may be approached in different ways. Based on the review of the literature, we can draw three basic conclusions. The problem of static, random perturbations has been solved. The static case is tractable by the Monte Carlo method. There is a specific time dependent case for which the average flux is given as a series expansion.
Synchronizability of random rectangular graphs
Estrada, Ernesto; Chen, Guanrong
2015-08-15
Random rectangular graphs (RRGs) represent a generalization of random geometric graphs in which the nodes are embedded in hyperrectangles instead of hypercubes. The synchronizability of the RRG model is studied. Both upper and lower bounds on the eigenratio of the network Laplacian matrix are determined analytically. It is proven that the more elongated the rectangular network is, the harder it becomes to synchronize. The synchronization behavior of an RRG network of chaotic Lorenz-system nodes is numerically investigated, showing complete consistency with the theoretical results.
Random organization and plastic depinning
Reichhardt, Charles; Reichhardt, Cynthia
2008-01-01
We provide evidence that the general phenomenon of plastic depinning can be described as an absorbing phase transition, and shows the same features as the random organization which was recently studied in periodically driven particle systems [L. Corte, Nature Phys. 4, 420 (2008)]. In the plastic flow system, the pinned regime corresponds to the absorbing state and the moving state corresponds to the fluctuating state. When an external force is suddenly applied, the system eventually organizes into one of these two states with a time scale that diverges as a power law at a nonequilibrium transition. We propose a simple experiment to test for this transition in systems with random disorder.
Random sequential adsorption of tetramers
NASA Astrophysics Data System (ADS)
Cieśla, Michał
2013-07-01
Adsorption of a tetramer built of four identical spheres was studied numerically using the random sequential adsorption (RSA) algorithm. Tetramers were adsorbed on a two-dimensional, flat and homogeneous surface. Two different models of the adsorbate were investigated: a rhomboid and a square one; monomer centres were put on vertices of rhomboids and squares, respectively. Numerical simulations allow us to establish the maximal random coverage ratio as well as the available surface function (ASF), which is crucial for determining kinetics of the adsorption process. These results were compared with data obtained experimentally for KfrA plasmid adsorption. Additionally, the density autocorrelation function was measured.
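The RSA procedure described in this abstract can be sketched as follows; this is not the authors' code, and the square-tetramer geometry, surface size, attempt budget and absence of periodic boundary conditions are illustrative assumptions.

```python
import math
import random

def tetramer_disks(x, y, theta, a=1.0):
    """Centres of four unit-diameter disks placed on the vertices of a
    square of side a, rotated by theta about the tetramer centre (x, y)."""
    h = a / math.sqrt(2.0)  # centre-to-vertex distance
    return [(x + h * math.cos(theta + k * math.pi / 2),
             y + h * math.sin(theta + k * math.pi / 2)) for k in range(4)]

def rsa_square_tetramers(L=15.0, attempts=3000, seed=0):
    """Random sequential adsorption: drop tetramers at uniformly random
    positions and orientations; accept a trial only if none of its disks
    overlaps a previously adsorbed disk (centre distance >= diameter)."""
    rng = random.Random(seed)
    placed = []   # centres of all adsorbed disks
    accepted = 0
    for _ in range(attempts):
        cand = tetramer_disks(rng.uniform(0, L), rng.uniform(0, L),
                              rng.uniform(0, 2 * math.pi))
        if all((cx - px) ** 2 + (cy - py) ** 2 >= 1.0
               for cx, cy in cand for px, py in placed):
            placed.extend(cand)
            accepted += 1
    coverage = accepted * math.pi / L ** 2  # four disks of radius 1/2 each
    return accepted, coverage
```

Running the attempt loop toward saturation (ever more attempts, ever fewer acceptances) approaches the maximal random coverage ratio; the late-time acceptance rate is what the available surface function encodes.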
2011-01-01
Background Chronic work-related stress is a significant and independent risk factor for cardiovascular and metabolic diseases and associated mortality, particularly when compounded by a sedentary work environment. Heart rate variability (HRV) provides an estimate of parasympathetic and sympathetic autonomic control, and can serve as a marker of physiological stress. Hatha yoga is a physically demanding practice that can help to reduce stress; however, time constraints incurred by work and family life may limit participation. The purpose of the present study is to determine if a 10-week, worksite-based yoga program delivered during lunch hour can improve resting HRV and related physical and psychological parameters in sedentary office workers. Methods and design This is a parallel-arm RCT that will compare the outcomes of participants assigned to the experimental treatment group (yoga) to those assigned to a no-treatment control group. Participants randomized to the experimental condition will engage in a 10-week yoga program delivered at their place of work. The yoga sessions will be group-based, prescribed three times per week during lunch hour, and will be led by an experienced yoga instructor. The program will involve teaching beginner students safely and progressively over 10 weeks a yoga sequence that incorporates asanas (poses and postures), vinyasa (exercises), pranayama (breathing control) and meditation. The primary outcome of this study is the high frequency (HF) spectral power component of HRV (measured in absolute units; i.e. ms²), a measure of parasympathetic autonomic control. Secondary outcomes include additional frequency and time domains of HRV, and measures of physical functioning and psychological health status. Measures will be collected prior to and following the intervention period, and at 6 months follow-up to determine the effect of intervention withdrawal. Discussion This study will determine the effect of worksite-based yoga practice on
A Mixed Effects Randomized Item Response Model
ERIC Educational Resources Information Center
Fox, J.-P.; Wyrick, Cheryl
2008-01-01
The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…
Are quasar redshifts randomly distributed
NASA Technical Reports Server (NTRS)
Weymann, R. J.; Boroson, T.; Scargle, J. D.
1978-01-01
A statistical analysis of possible clumping (not periodicity) of emission line redshifts of QSO's shows the available data to be compatible with random fluctuations of a smooth, non-clumped distribution. This result is demonstrated with Monte Carlo simulations as well as with the Kolmogorov-Smirnov test. It is in complete disagreement with the analysis by Varshni, which is shown to be incorrect.
Randomized Item Response Theory Models
ERIC Educational Resources Information Center
Fox, Jean-Paul
2005-01-01
The randomized response (RR) technique is often used to obtain answers to sensitive questions. A new method is developed to measure latent variables using the RR technique, because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by an item response theory (IRT) model. The RR…
Common Randomness Principles of Secrecy
ERIC Educational Resources Information Center
Tyagi, Himanshu
2013-01-01
This dissertation concerns the secure processing of distributed data by multiple terminals, using interactive public communication among themselves, in order to accomplish a given computational task. In the setting of a probabilistic multiterminal source model in which several terminals observe correlated random signals, we analyze secure…
Entropy of random entangling surfaces
NASA Astrophysics Data System (ADS)
Solodukhin, Sergey N.
2012-09-01
We consider the situation when a globally defined four-dimensional field system is separated on two entangled sub-systems by a dynamical (random) two-dimensional surface. The reduced density matrix averaged over ensemble of random surfaces of fixed area and the corresponding average entropy are introduced. The average entanglement entropy is analyzed for a generic conformal field theory in four dimensions. Two important particular cases are considered. In the first, both the intrinsic metric on the entangling surface and the spacetime metric are fluctuating. An important example of this type is when the entangling surface is a black hole horizon, the fluctuations of which cause necessarily the fluctuations in the spacetime geometry. In the second case, the spacetime is considered to be fixed. The detailed analysis is carried out for the random entangling surfaces embedded in flat Minkowski spacetime. In all cases, the problem reduces to an effectively two-dimensional problem of random surfaces which can be treated by means of the well-known conformal methods. Focusing on the logarithmic terms in the entropy, we predict the appearance of a new ln ln(A) term. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’.
NASA Technical Reports Server (NTRS)
Katti, Romney R.
1995-01-01
Random-access memory (RAM) devices of proposed type exploit magneto-optical properties of magnetic garnets exhibiting perpendicular anisotropy. Magnetic writing and optical readout used. Provides nonvolatile storage and resists damage by ionizing radiation. Because of basic architecture and pinout requirements, most likely useful as small-capacity memory devices.
Undecidability Theorem and Quantum Randomness
NASA Astrophysics Data System (ADS)
Berezin, Alexander A.
2005-04-01
As scientific folklore has it, Kurt Gödel was once annoyed by the question of whether he saw any link between his Undecidability Theorem (UT) and the Uncertainty Relationship. His reaction, however, may indicate that he felt such a hidden link could indeed exist but was unable to formulate it clearly. The informational version of UT (G. J. Chaitin) states the impossibility of ruling out algorithmic compressibility of an arbitrary digital string. Thus, (mathematical) randomness can only be disproven, not proven. Going from mathematical to physical (mainly quantum) randomness, we encounter seemingly random acts of radioactive decay of isotopes (such as C14), emission from excited atoms, tunneling effects, etc. However, our notion of quantum randomness (QR) may well hit a similarly formidable wall in a physical version of UT, leading to seemingly bizarre ideas such as the Everett many-worlds model (D. Deutsch) or backward causation (J. A. Wheeler). A resolution may potentially lie in admitting some form of Aristotelian final causation (AFC) as an ultimate foundational principle (G. W. Leibniz) connecting purely mathematical (Platonic) grounding aspects with its physically observable consequences, such as the plethora of QR effects. Thus, what we interpret as QR may eventually be a manifestation of AFC in which UT serves as the delivery vehicle. Another example of the UT/QR/AFC connection is the question of the identity (indistinguishability) of elementary particles (are all electrons exactly the same, or just approximately so to a very high degree?).
Plated wire random access memories
NASA Technical Reports Server (NTRS)
Gouldin, L. D.
1975-01-01
A program was conducted to construct 4096-word by 18-bit random-access, NDRO plated-wire memory units. The memory units were subjected to comprehensive functional and environmental tests at the end-item level to verify conformance with the specified requirements. A technical description of the unit is given, along with acceptance test data sheets.
Models of random graph hierarchies
NASA Astrophysics Data System (ADS)
Paluch, Robert; Suchecki, Krzysztof; Hołyst, Janusz A.
2015-10-01
We introduce two models of inclusion hierarchies: random graph hierarchy (RGH) and limited random graph hierarchy (LRGH). In both models a set of nodes at a given hierarchy level is connected randomly, as in the Erdős-Rényi random graph, with a fixed average degree equal to a system parameter c. Clusters of the resulting network are treated as nodes at the next hierarchy level and they are connected again at this level and so on, until the process cannot continue. In the RGH model we use all clusters, including those of size 1, when building the next hierarchy level, while in the LRGH model clusters of size 1 stop participating in further steps. We find that in both models the number of nodes at a given hierarchy level h decreases approximately exponentially with h. The height of the hierarchy H, i.e. the number of all hierarchy levels, increases logarithmically with the system size N, i.e. with the number of nodes at the first level. The height H decreases monotonically with the connectivity parameter c in the RGH model and it reaches a maximum for a certain c_max in the LRGH model. The distribution of separate cluster sizes in the LRGH model is a power law with an exponent of about -1.25. The above results follow from approximate analytical calculations and have been confirmed by numerical simulations.
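A minimal sketch of the RGH construction (clusters of size 1 included), under the assumption that each level is an Erdős-Rényi graph G(n, p) with p = c/n; the function names and parameter values are ours, not the authors'.

```python
import random

def er_components(n, c, rng):
    """Connected components of an Erdos-Renyi graph on n nodes with
    average degree c (edge probability p = c/n), via union-find."""
    p = min(1.0, c / n)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                parent[find(i)] = find(j)
    roots = {}
    for i in range(n):
        roots.setdefault(find(i), []).append(i)
    return list(roots.values())

def rgh_height(n=300, c=2.0, seed=1):
    """Build hierarchy levels until a single cluster remains (or no
    merging occurs); return the number of levels, i.e. the height H."""
    rng = random.Random(seed)
    height = 0
    while n > 1:
        comps = er_components(n, c, rng)
        if len(comps) == n:   # nothing merged; the process cannot continue
            break
        n = len(comps)        # clusters become the nodes of the next level
        height += 1
    return height
```

Averaging rgh_height over seeds for increasing n should reproduce the logarithmic growth of H with system size reported in the abstract.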
Universality in random quantum networks
NASA Astrophysics Data System (ADS)
Novotný, Jaroslav; Alber, Gernot; Jex, Igor
2015-12-01
Networks constitute efficient tools for assessing universal features of complex systems. In physical contexts, classical as well as quantum networks are used to describe a wide range of phenomena, such as phase transitions, intricate aspects of many-body quantum systems, or even characteristic features of a future quantum internet. Random quantum networks and their associated directed graphs are employed for capturing statistically dominant features of complex quantum systems. Here, we develop an efficient iterative method capable of evaluating the probability of a graph being strongly connected. It is proven that random directed graphs with constant edge-establishing probability are typically strongly connected, i.e., any ordered pair of vertices is connected by a directed path. This typical topological property of directed random graphs is exploited to demonstrate universal features of the asymptotic evolution of large random qubit networks. These results are independent of our knowledge of the details of the network topology. These findings suggest that other highly complex networks, such as a future quantum internet, may also exhibit similar universal properties.
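The paper's iterative method is analytical, but its headline claim (random directed graphs with constant edge-establishing probability are typically strongly connected) is easy to probe numerically. The sketch below is our own brute-force Monte Carlo check, with illustrative parameter values.

```python
import random
from collections import deque

def random_digraph(n, p, rng):
    """Directed Erdos-Renyi graph: each ordered pair (i, j), i != j,
    gets an arc independently with probability p."""
    adj = [[] for _ in range(n)]
    radj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and rng.random() < p:
                adj[i].append(j)
                radj[j].append(i)
    return adj, radj

def n_reachable(adj, src):
    """Number of vertices reachable from src by breadth-first search."""
    seen = {src}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen)

def prob_strongly_connected(n=40, p=0.2, trials=100, seed=0):
    """A digraph is strongly connected iff every vertex is reachable
    from vertex 0 in both the graph and its reverse."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        adj, radj = random_digraph(n, p, rng)
        if n_reachable(adj, 0) == n and n_reachable(radj, 0) == n:
            hits += 1
    return hits / trials
```

For constant p well above ln(n)/n the estimated probability should approach 1, consistent with the typicality result stated above.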
Entanglement generation of nearly random operators.
Weinstein, Yaakov S; Hellberg, C Stephen
2005-07-15
We study the entanglement generation of operators whose statistical properties approach those of random matrices but are restricted in some way. These include interpolating ensemble matrices, where the interval of the independent random parameters are restricted, pseudorandom operators, where there are far fewer random parameters than required for random matrices, and quantum chaotic evolution. Restricting randomness in different ways allows us to probe connections between entanglement and randomness. We comment on which properties affect entanglement generation and discuss ways of efficiently producing random states on a quantum computer. PMID:16090726
Random trinomial tree models and vanilla options
NASA Astrophysics Data System (ADS)
Ganikhodjaev, Nasir; Bayram, Kamola
2013-09-01
In this paper we introduce and study the random trinomial model. The usual trinomial model is prescribed by a triple of numbers (u, d, m); we call the triple (u, d, m) the environment of the trinomial model. A triple (Un, Dn, Mn), where {Un}, {Dn} and {Mn} are sequences of independent, identically distributed random variables with 0 < Dn < 1 < Un and Mn = 1 for all n, is called a random environment, and a trinomial tree model with a random environment is called a random trinomial model. The random trinomial model is considered to produce more accurate results than the random binomial model or the usual trinomial model.
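One hedged way to see the model in action: redraw the environment (Un, Dn, Mn = 1) at every step and average a call payoff by Monte Carlo. The uniform distributions for Un and Dn, the equal branch probabilities, and the absence of discounting and risk-neutral calibration are our illustrative assumptions, not part of the paper.

```python
import random

def random_trinomial_payoff(S0=100.0, K=100.0, steps=8, paths=5000,
                            probs=(1/3, 1/3, 1/3), seed=0):
    """Monte Carlo over a random trinomial tree: at every step the
    environment (U_n, D_n, M_n) is redrawn with 0 < D_n < 1 < U_n and
    M_n = 1, then the price moves up, middle or down with the given
    branch probabilities. Returns the average call payoff max(S - K, 0)."""
    rng = random.Random(seed)
    pu, pm, _pd = probs
    total = 0.0
    for _ in range(paths):
        s = S0
        for _ in range(steps):
            u = rng.uniform(1.02, 1.15)   # U_n > 1
            d = rng.uniform(0.88, 0.98)   # 0 < D_n < 1
            r = rng.random()
            if r < pu:
                s *= u
            elif r < pu + pm:
                s *= 1.0                  # M_n = 1
            else:
                s *= d
        total += max(s - K, 0.0)
    return total / paths
```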
Random density matrices versus random evolution of open system
NASA Astrophysics Data System (ADS)
Pineda, Carlos; Seligman, Thomas H.
2015-10-01
We present and compare two families of ensembles of random density matrices. The first, static ensemble is obtained by foliating an unbiased ensemble of density matrices. As the criterion we use fixed purity, the simplest example of a useful convex function. The second, dynamic ensemble is inspired by random matrix models for decoherence, where one evolves a separable pure state with a random Hamiltonian until a given value of purity in the central system is achieved. Several families of Hamiltonians, adequate for different physical situations, are studied. We focus on a two-qubit central system and obtain exact expressions for the static case. The ensemble displays a peak around Werner-like states, modulated by nodes on the degeneracies of the density matrices. For moderate and strong interactions good agreement between the static and the dynamic ensembles is found. Even in a model where one qubit does not interact with the environment, excellent agreement is found, but only if there is maximal entanglement with the interacting one. The discussion starts by recalling similar considerations for scattering theory. At the end, we comment on the reach of the results for other convex functions of the density matrix, and exemplify the situation with the von Neumann entropy.
Cognitive-Behavioral Treatment of Insomnia and Depression in Adolescents: A Pilot Randomized Trial
Clarke, Greg; McGlinchey, Eleanor L.; Hein, Kerrie; Gullion, Christina M.; Dickerson, John F.; Leo, Michael C.; Harvey, Allison G.
2015-01-01
We tested whether augmenting conventional depression treatment in youth by treating sleep issues with cognitive behavioral therapy for insomnia (CBT-I) improved depression outcomes. We randomized youth 12–20 years of age to 10 weekly sessions of a sleep hygiene control condition (SH) combined with CBT for depression (CBT-D) (n=20), or an experimental condition consisting of CBT-I combined with CBT-D (n=21). We assessed outcomes through 26 weeks of follow-up and found medium-large effects favoring the experimental CBT-I arm on some sleep outcomes (actigraphy total sleep time and Insomnia Severity Index “caseness”) and depression outcomes (higher percentage recovered, faster time to recovery), but little effect on other measures. Total sleep time improved by 99 minutes from baseline to week 12 in the CBT-I arm, but not in the SH arm. In addition, our pilot yielded important products to facilitate future studies: the youth-adapted CBT-I program; the study protocol; estimates of recruitment, retention, and attrition; and performance and parameters of candidate outcome measures. PMID:25917009
A Randomized Trial of Contingency Management for Adolescent Marijuana Abuse and Dependence
Stanger, Catherine; Budney, Alan J.; Kamon, Jody L.; Thostensen, Jeff
2009-01-01
An initial efficacy test of an innovative behavioral outpatient treatment model for adolescents with problematic use of marijuana enrolled 69 adolescents, aged 14–18, and randomly assigned them to one of two treatment conditions. Both conditions received individualized Motivational Enhancement and Cognitive Behavioral Therapy (MET/CBT) and a twice-weekly drug-testing program. The experimental contingency management condition involved a clinic delivered, abstinence-based incentive program, and weekly behavioral parent training sessions that included a parent-delivered, abstinence-based, substance monitoring contract. The comparison condition included an attendance-based incentive program, and weekly psychoeducational parent sessions. Follow-up assessments were performed at 3, 6, 9 months post-treatment. The experimental condition showed greater marijuana abstinence during treatment, e.g., 7.6 vs. 5.1 continuous weeks and 50% vs. 18% achieved ≥ 10 weeks of abstinence. Improvements were found in parenting and youth psychopathology across treatment conditions, and improvements in negative parenting uniquely predicted post treatment abstinence. The outcomes observed in the experimental condition are consistent with adult substance dependence treatment literature, and suggest that integrating CM abstinence-based approaches with other empirically-based outpatient interventions provides an alternative and efficacious treatment model for adolescent substance abuse/dependence. Replication and continued development of more potent interventions remain needed to further advance the development of effective substance abuse treatments for adolescents. PMID:19717250
Service learning in medical and nursing training: a randomized controlled trial.
Leung, A Y M; Chan, S S C; Kwan, C W; Cheung, M K T; Leung, S S K; Fong, D Y T
2012-10-01
The purpose of this study was to explore the long-term effect of a service learning project on medical and nursing students' knowledge in aging and their attitudes toward older adults. A total of 124 students were recruited and then randomized to an intervention group (IG) and a control group (CG). A pre- and post-intervention design measured students' knowledge in aging (using the modified Palmore's Facts on Aging Quiz) and attitudes toward older adults (using Kogan's Old People Scale). A total of 103 students completed all the activities and questionnaires. After the intervention, there were significant differences between the IG and CG on Palmore's mental health (MH) (P = .04), Palmore's total score (P = .02) and Kogan's negative attitudes toward older adults (P = .001). All students increased their positive attitude toward older adults after the intervention. However, both the IG and CG showed a decrease in positive attitudes 1 month after the intervention, and this decrease varied depending on the programme the students attended. The current study showed that the 10-week service learning activities significantly increased medical and nursing students' overall knowledge of aging and their understanding of mental health needs in old age, and reduced their negative attitudes toward older adults. However, the effect is not long-lasting. On the other hand, its effect on positive attitudes toward older adults cannot be concluded. Periodic contacts with older adults via service learning activities may be needed to sustain attitude change toward older adults. PMID:21964953
Nordin, Sara; Carlbring, Per; Cuijpers, Pim; Andersson, Gerhard
2010-09-01
Cognitive behavioral bibliotherapy for panic disorder has been found to be less effective without therapist support. In this study, participants were randomized to either unassisted bibliotherapy (n=20) with a scheduled follow-up telephone interview or to a waiting list control group (n=19). Following a structured psychiatric interview, participants in the treatment group were sent a self-help book consisting of 10 chapters based on cognitive behavioral strategies for the treatment of panic disorder. No therapist contact of any kind was provided during the treatment phase, which lasted for 10 weeks. Results showed that the treatment group had, in comparison to the control group, improved on all outcome measures at posttreatment and at 3-month follow-up. The tentative conclusion drawn from these results is that pure bibliotherapy with a clear deadline can be effective for people suffering from panic disorder with or without agoraphobia. PMID:20569776
Optimal randomized scheduling by replacement
Saias, I.
1996-05-01
In the replacement scheduling problem, a system is composed of n processors drawn from a pool of p. The processors can become faulty while in operation and faulty processors never recover. A report is issued whenever a fault occurs. This report states only the existence of a fault but does not indicate its location. Based on this report, the scheduler can reconfigure the system and choose another set of n processors. The system operates satisfactorily as long as, upon report of a fault, the scheduler chooses n non-faulty processors. We provide a randomized protocol maximizing the expected number of faults the system can sustain before the occurrence of a crash. The optimality of the protocol is established by considering a closely related dual optimization problem. The game-theoretic technical difficulties that we solve in this paper are very general and encountered whenever proving the optimality of a randomized algorithm in parallel and distributed computation.
Random errors in egocentric networks.
Almquist, Zack W
2012-10-01
The systematic errors that are induced by a combination of human memory limitations and common survey design and implementation have long been studied in the context of egocentric networks. Despite this, little if any work exists in the area of random error analysis on these same networks; this paper offers a perspective on the effects of random errors on egonet analysis, as well as the effects of using egonet measures as independent predictors in linear models. We explore the effects of false-positive and false-negative error in egocentric networks on both standard network measures and on linear models through simulation analysis on a ground truth egocentric network sample based on facebook-friendships. Results show that 5-20% error rates, which are consistent with error rates known to occur in ego network data, can cause serious misestimation of network properties and regression parameters. PMID:23878412
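The error process studied here is straightforward to simulate. The sketch below perturbs a true alter-alter edge list with independent false-negative and false-positive flips and compares densities; the error rates, network size and Erdős-Rényi ground truth used in the usage example are illustrative stand-ins for the paper's Facebook-based sample.

```python
import random

def perturb_edges(true_edges, n, fp=0.1, fn=0.1, seed=0):
    """Observed egonet: each true edge is missed (false negative) with
    probability fn; each absent pair is spuriously reported (false
    positive) with probability fp."""
    rng = random.Random(seed)
    true_set = set(true_edges)
    observed = {e for e in true_edges if rng.random() >= fn}
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) not in true_set and rng.random() < fp:
                observed.add((i, j))
    return observed

def density(edges, n):
    """Fraction of the n(n-1)/2 possible alter-alter ties present."""
    return 2 * len(edges) / (n * (n - 1))
```

On a sparse egonet the false-positive term dominates, so even symmetric 10% error rates inflate the observed density, one concrete face of the misestimation reported above.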
Percolation on correlated random networks
NASA Astrophysics Data System (ADS)
Agliari, E.; Cioli, C.; Guadagnini, E.
2011-09-01
We consider a class of random, weighted networks, obtained through a redefinition of patterns in an Hopfield-like model, and, by performing percolation processes, we get information about topology and resilience properties of the networks themselves. Given the weighted nature of the graphs, different kinds of bond percolation can be studied: stochastic (deleting links randomly) and deterministic (deleting links based on rank weights), each mimicking a different physical process. The evolution of the network is accordingly different, as evidenced by the behavior of the largest component size and of the distribution of cluster sizes. In particular, we can derive that weak ties are crucial in order to maintain the graph connected and that, when they are the most prone to failure, the giant component typically shrinks without abruptly breaking apart; these results have been recently evidenced in several kinds of social networks.
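The two bond-percolation protocols can be sketched as below. A hedge is in order: on an uncorrelated graph with i.i.d. weights (used here for brevity) the two protocols agree in distribution, whereas the weak-tie effect reported above depends on the weight-topology correlations of the Hopfield-like construction, which this sketch does not reproduce.

```python
import random

def largest_component(n, edges):
    """Size of the largest connected component, via union-find."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for u, v in edges:
        parent[find(u)] = find(v)
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

def percolate(n=500, mean_degree=6.0, frac_removed=0.5,
              mode="stochastic", seed=0):
    """Delete a fraction of the links of a weighted random graph either
    uniformly at random ('stochastic') or weakest ties first
    ('deterministic'); return the surviving giant-component size."""
    rng = random.Random(seed)
    p = mean_degree / (n - 1)
    edges = [(i, j, rng.random()) for i in range(n)
             for j in range(i + 1, n) if rng.random() < p]
    keep = int(len(edges) * (1.0 - frac_removed))
    if mode == "stochastic":
        kept = rng.sample(edges, keep)
    else:  # rank links by weight and drop the weakest first
        kept = sorted(edges, key=lambda e: e[2], reverse=True)[:keep]
    return largest_component(n, [(u, v) for u, v, _ in kept])
```

Tracking the giant-component size as frac_removed grows, under each mode, reproduces the kind of percolation curves the paper uses to compare the two deletion processes.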
Random modelling of contagious diseases.
Demongeot, J; Hansen, O; Hessami, H; Jannot, A S; Mintsa, J; Rachdi, M; Taramasco, C
2013-03-01
Modelling contagious diseases needs to include mechanistic knowledge about the contacts between hosts and pathogens that is as specific as possible, e.g., by incorporating into the model information about the social networks through which the disease spreads. The unknown part of the contact mechanism can be modelled using a stochastic approach. For that purpose, we revisit SIR models, first by introducing a microscopic stochastic version of the contacts between individuals of the different populations (namely Susceptible, Infective and Recovering), then by adding a random perturbation in the vicinity of the endemic fixed point of the SIR model, and eventually by introducing definitions of various types of random social networks. As an example of application to contagious diseases we consider HIV, and we show that a micro-simulation of the individual-based modelling (IBM) type can reproduce the current stable incidence of the HIV epidemic in a population of HIV-positive men who have sex with men (MSM). PMID:23525763
Image segmentation using random features
NASA Astrophysics Data System (ADS)
Bull, Geoff; Gao, Junbin; Antolovich, Michael
2014-01-01
This paper presents a novel algorithm for selecting random features via compressed sensing to improve the performance of Normalized Cuts in image segmentation. Normalized Cuts is a clustering algorithm that has been widely applied to segmenting images, using features such as brightness, intervening contours and Gabor filter responses. Some drawbacks of Normalized Cuts are that computation times and memory usage can be excessive, and the obtained segmentations are often poor. This paper addresses the need to improve the processing time of Normalized Cuts while improving the segmentations. A significant proportion of the time in calculating Normalized Cuts is spent computing an affinity matrix. A new algorithm has been developed that selects random features using compressed sensing techniques to reduce the computation needed for the affinity matrix. The new algorithm, when compared to the standard implementation of Normalized Cuts for segmenting images from the BSDS500, produces better segmentations in significantly less time.
The weighted random graph model
NASA Astrophysics Data System (ADS)
Garlaschelli, Diego
2009-07-01
We introduce the weighted random graph (WRG) model, which represents the weighted counterpart of the Erdos-Renyi random graph and provides fundamental insights into more complicated weighted networks. We find analytically that the WRG is characterized by a geometric weight distribution, a binomial degree distribution and a negative binomial strength distribution. We also characterize exactly the percolation phase transitions associated with edge removal and with the appearance of weighted subgraphs of any order and intensity. We find that even this completely null model displays a percolation behaviour similar to what is observed in real weighted networks, implying that edge removal cannot be used to detect community structure empirically. By contrast, the analysis of clustering successfully reveals different patterns between the WRG and real networks.
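A sketch of sampling from the WRG: each pair independently receives an integer weight with the geometric law P(w) = (1 - p) p^w, so a link (w > 0) exists with probability p and the binomial degree distribution follows immediately. Function names and sizes are ours.

```python
import random

def sample_wrg(n, p, seed=0):
    """Weighted random graph: every pair (i, j) gets weight w with
    P(w) = (1 - p) * p**w; only positive weights are stored, so a stored
    key is exactly an edge of the binary projection."""
    rng = random.Random(seed)
    weights = {}
    for i in range(n):
        for j in range(i + 1, n):
            w = 0
            while rng.random() < p:  # each success adds one weight unit
                w += 1
            if w > 0:
                weights[(i, j)] = w
    return weights

def strengths(n, weights):
    """Node strength = total weight of incident edges; in the WRG this
    follows a negative binomial distribution."""
    s = [0] * n
    for (i, j), w in weights.items():
        s[i] += w
        s[j] += w
    return s
```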
Random drift and culture change.
Bentley, R Alexander; Hahn, Matthew W; Shennan, Stephen J
2004-07-22
We show that the frequency distributions of cultural variants, in three different real-world examples--first names, archaeological pottery and applications for technology patents--follow power laws that can be explained by a simple model of random drift. We conclude that cultural and economic choices often reflect a decision process that is value-neutral; this result has far-reaching testable implications for social-science research. PMID:15306315
Randomized gap and amplitude estimation
NASA Astrophysics Data System (ADS)
Zintchenko, Ilia; Wiebe, Nathan
2016-06-01
We provide a method for estimating spectral gaps in low-dimensional systems. Unlike traditional phase estimation, our approach does not require ancillary qubits nor does it require well-characterized gates. Instead, it only requires the ability to perform approximate Haar random unitary operations, applying the unitary whose eigenspectrum is sought and performing measurements in the computational basis. We discuss application of these ideas to in-place amplitude estimation and quantum device calibration.
Correlated randomness and switching phenomena
NASA Astrophysics Data System (ADS)
Stanley, H. E.; Buldyrev, S. V.; Franzese, G.; Havlin, S.; Mallamace, F.; Kumar, P.; Plerou, V.; Preis, T.
2010-08-01
One challenge of biology, medicine, and economics is that the systems treated by these serious scientific disciplines have no perfect metronome in time and no perfect spatial architecture, crystalline or otherwise. Nonetheless, as if by magic, out of nothing but randomness one finds remarkably fine-tuned processes in time and remarkably fine-tuned structures in space. Further, many of these processes and structures have the remarkable feature of “switching” from one behavior to another as if by magic. The past century has, philosophically, been concerned with placing aside the human tendency to see the universe as a fine-tuned machine. Here we will address the challenge of uncovering how, through randomness (albeit, as we shall see, strongly correlated randomness), one can arrive at some of the many spatial and temporal patterns in biology, medicine, and economics and even begin to characterize the switching phenomena that enable a system to pass from one state to another. Inspired by principles developed by A. Nihat Berker and scores of other statistical physicists in recent years, we discuss some applications of correlated randomness to understand switching phenomena in various fields. Specifically, we present evidence from experiments and from computer simulations supporting the hypothesis that water’s anomalies are related to a switching point (which is not unlike the “tipping point” immortalized by Malcolm Gladwell), and that the bubbles in economic phenomena that occur on all scales are not “outliers” (another Gladwell immortalization). Though more speculative, we support the idea of disease as arising from some kind of yet-to-be-understood complex switching phenomenon by discussing data on selected examples, including heart disease and Alzheimer disease.
Rouhani, Mohammad Hossein; Kelishadi, Roya; Hashemipour, Mahin; Esmaillzadeh, Ahmad
2013-01-01
Although several studies have assessed the influence of the glycemic index on body weight and blood pressure among adults, limited evidence exists for the pediatric age population. In the current study, we compared the effects of a low glycemic index (LGI) diet to a healthy nutritional recommendation (HNR)-based diet on obesity and blood pressure among adolescent girls of pubertal age. This 10-week parallel randomized clinical trial comprised 50 overweight or obese, sexually mature girls less than 18 years of age, who were randomly assigned to the LGI or HNR-based diet. Macronutrient distribution was prescribed equivalently in both groups. Blood pressure, weight and waist circumference were measured at baseline and after the intervention. Of the 50 participants, 41 (82%) completed the study. The GI of the diet in the LGI group was 42.67 ± 0.067. A within-group analysis illustrated that, in comparison to baseline values, body weight and body mass index (but not waist circumference and blood pressure) decreased significantly after the intervention in both groups (P = 0.0001). The percent changes in body weight status, waist circumference and blood pressure were compared between the two groups, and the findings did not show any difference between the LGI diet consumers and those in the HNR group. In comparison to the HNR diet, the LGI diet did not change weight or blood pressure following the 10-week intervention. Further longitudinal studies with long-term follow-up should be conducted in this regard. PMID:24133618
Resolution analysis by random probing
NASA Astrophysics Data System (ADS)
Simutė, S.; Fichtner, A.; van Leeuwen, T.
2015-12-01
We develop and apply methods for resolution analysis in tomography, based on stochastic probing of the Hessian or resolution operators. Key properties of our methods are (i) low algorithmic complexity and easy implementation, (ii) applicability to any tomographic technique, including full-waveform inversion and linearized ray tomography, (iii) applicability in any spatial dimension and to inversions with a large number of model parameters, (iv) low computational costs that are mostly a fraction of those required for synthetic recovery tests, and (v) the ability to quantify both spatial resolution and inter-parameter trade-offs. Using synthetic full-waveform inversions as benchmarks, we demonstrate that auto-correlations of random-model applications to the Hessian yield various resolution measures, including direction- and position-dependent resolution lengths, and the strength of inter-parameter mappings. We observe that the required number of random test models is around 5 in one, two and three dimensions. This means that the proposed resolution analyses are not only more meaningful than recovery tests but also computationally less expensive. We demonstrate the applicability of our method in 3D real-data full-waveform inversions for the western Mediterranean and Japan. In addition to tomographic problems, resolution analysis by random probing may be used in other inverse methods that constrain continuously distributed properties, including electromagnetic and potential-field inversions, as well as recently emerging geodynamic data assimilation.
Resolution analysis by random probing
NASA Astrophysics Data System (ADS)
Fichtner, Andreas; Leeuwen, Tristan van
2015-08-01
We develop and apply methods for resolution analysis in tomography, based on stochastic probing of the Hessian or resolution operators. Key properties of our methods are (i) low algorithmic complexity and easy implementation, (ii) applicability to any tomographic technique, including full-waveform inversion and linearized ray tomography, (iii) applicability in any spatial dimension and to inversions with a large number of model parameters, (iv) low computational costs that are mostly a fraction of those required for synthetic recovery tests, and (v) the ability to quantify both spatial resolution and interparameter trade-offs. Using synthetic full-waveform inversions as benchmarks, we demonstrate that autocorrelations of random-model applications to the Hessian yield various resolution measures, including direction- and position-dependent resolution lengths and the strength of interparameter mappings. We observe that the required number of random test models is around five in one, two, and three dimensions. This means that the proposed resolution analyses are not only more meaningful than recovery tests but also computationally less expensive. We demonstrate the applicability of our method in a 3-D real-data full-waveform inversion for the western Mediterranean. In addition to tomographic problems, resolution analysis by random probing may be used in other inverse methods that constrain continuously distributed properties, including electromagnetic and potential-field inversions, as well as recently emerging geodynamic data assimilation.
Topological insulators in random potentials
NASA Astrophysics Data System (ADS)
Pieper, Andreas; Fehske, Holger
2016-01-01
We investigate the effects of magnetic and nonmagnetic impurities on the two-dimensional surface states of three-dimensional topological insulators (TIs). Modeling weak and strong TIs using a generic four-band Hamiltonian, which allows for a breaking of inversion and time-reversal symmetries and takes into account random local potentials as well as the Zeeman and orbital effects of external magnetic fields, we compute the local density of states, the single-particle spectral function, and the conductance for a (contacted) slab geometry by numerically exact techniques based on kernel polynomial expansion and Green's function approaches. We show that bulk disorder refills the surface-state Dirac gap induced by a homogeneous magnetic field with states, whereas orbital (Peierls-phase) disorder preserves the gap feature. The former effect is more pronounced in weak TIs than in strong TIs. At moderate randomness, disorder-induced conducting channels appear in the surface layer, promoting diffusive metallicity. Random Zeeman fields rapidly destroy any conducting surface states. Imprinting quantum dots on a TI's surface, we demonstrate that carrier transport can be easily tuned by varying the gate voltage, even to the point where quasibound dot states may appear.
Approximating random quantum optimization problems
NASA Astrophysics Data System (ADS)
Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.
2013-06-01
We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions and insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.
Decoupling with Random Quantum Circuits
NASA Astrophysics Data System (ADS)
Brown, Winton; Fawzi, Omar
2015-12-01
Decoupling has become a central concept in quantum information theory, with applications including proving coding theorems, randomness extraction and the study of conditions for reaching thermal equilibrium. However, our understanding of the dynamics that lead to decoupling is limited. In fact, the only families of transformations that are known to lead to decoupling are (approximate) unitary two-designs, i.e., measures over the unitary group that behave like the Haar measure as far as the first two moments are concerned. Such families include for example random quantum circuits with O(n^2) gates, where n is the number of qubits in the system under consideration. In fact, all known constructions of decoupling circuits use Ω(n^2) gates. Here, we prove that random quantum circuits with O(n log^2 n) gates satisfy an essentially optimal decoupling theorem. In addition, these circuits can be implemented in depth O(log^3 n). This proves that decoupling can happen in a time that scales polylogarithmically in the number of particles in the system, provided all the particles are allowed to interact. Our proof does not proceed by showing that such circuits are approximate two-designs in the usual sense, but rather we directly analyze the decoupling property.
Morin, Mélanie; Dumoulin, Chantale; Bergeron, Sophie; Mayrand, Marie-Hélène; Khalifé, Samir; Waddell, Guy; Dubois, Marie-France
2016-01-01
Provoked vestibulodynia (PVD) is a highly prevalent and debilitating condition yet its management relies mainly on non-empirically validated interventions. Among the many causes of PVD, there is growing evidence that pelvic floor muscle (PFM) dysfunctions play an important role in its pathophysiology. Multimodal physiotherapy, which addresses these dysfunctions, is judged by experts to be highly effective and is recommended as a first-line treatment. However, the effectiveness of this promising intervention has been evaluated through only two small uncontrolled trials. The proposed bi-center, single-blind, parallel group, randomized controlled trial (RCT) aims to evaluate the efficacy of multimodal physiotherapy and compare it to a frequently used first-line treatment, topical overnight application of lidocaine, in women with PVD. A total of 212 women diagnosed with PVD according to a standardized protocol were eligible for the study and were randomly assigned to either multimodal physiotherapy or lidocaine treatment for 10 weeks. The primary outcome measure is pain during intercourse (assessed with a numerical rating scale). Secondary measures include sexual function, pain quality, psychological factors (including pain catastrophizing, anxiety, depression and fear of pain), PFM morphology and function, and patients' global impression of change. Assessments are made at baseline, post-treatment and at the 6-month follow-up. This manuscript presents and discusses the rationale, design and methodology of the first RCT investigating physiotherapy in comparison to a commonly prescribed first-line treatment, overnight topical lidocaine, for women with PVD. PMID:26600287
Thøgersen-Ntoumani, C; Loughren, E A; Kinnafick, F-E; Taylor, I M; Duda, J L; Fox, K R
2015-12-01
Physical activity may regulate affective experiences at work, but controlled studies are needed and there has been a reliance on retrospective accounts of experience. The purpose of the present study was to examine the effect of lunchtime walks on momentary work affect at the individual and group levels. Physically inactive employees (N = 56; M age = 47.68; 92.86% female) from a large university in the UK were randomized to immediate treatment or delayed treatment (DT). The DT participants completed both a control and intervention period. During the intervention period, participants partook in three weekly 30-min lunchtime group-led walks for 10 weeks. They completed twice daily affective reports at work (morning and afternoon) using mobile phones on two randomly chosen days per week. Multilevel modeling was used to analyze the data. Lunchtime walks improved enthusiasm, relaxation, and nervousness at work, although the pattern of results differed depending on whether between-group or within-person analyses were conducted. The intervention was effective in changing some affective states and may have broader implications for public health and workplace performance. PMID:25559067
Peabody, John W; Shimkhada, Riti; Quimbo, Stella; Solon, Orville; Javier, Xylee; McCulloch, Charles
2014-08-01
Improving clinical performance using measurement and payment incentives, including pay for performance (or P4P), has, so far, shown modest to no benefit on patient outcomes. Our objective was to assess the impact of a P4P programme on paediatric health outcomes in the Philippines. We used data from the Quality Improvement Demonstration Study. In this study, the P4P intervention, introduced in 2004, was randomly assigned to 10 community district hospitals, which were matched to 10 control sites. At all sites, physician quality was measured using Clinical Performance Vignettes (CPVs) among randomly selected physicians every 6 months over a 36-month period. In the hospitals randomized to the P4P intervention, physicians received bonus payments if they met qualifying scores on the CPV. We measured health outcomes 4-10 weeks after hospital discharge among children 5 years of age and under who had been hospitalized for diarrhoea and pneumonia (the two most common illnesses affecting this age cohort) and had been under the care of physicians participating in the study. Health outcomes data collection was done at baseline/pre-intervention and 2 years post-intervention on the following post-discharge outcomes: (1) age-adjusted wasting, (2) C-reactive protein in blood, (3) haemoglobin level and (4) parental assessment of child's health using the general self-reported health (GSRH) measure. To evaluate changes in health outcomes in the control vs intervention sites over time (baseline vs post-intervention), we used a difference-in-difference logistic regression analysis, controlling for potential confounders. We found an improvement of 7 and 9 percentage points in GSRH and wasting over time (post-intervention vs baseline) in the intervention sites relative to the control sites (P ≤ 0.001). The results from this randomized social experiment indicate that the introduction of a performance-based incentive programme, which included measurement and feedback, led to improvements in paediatric health outcomes.
Gabriels, Robin L.; Pan, Zhaoxing; Dechant, Briar; Agnew, John A.; Brim, Natalie; Mesibov, Gary
2015-01-01
Objective This study expands previous equine-assisted intervention research by evaluating the effectiveness of therapeutic horseback riding (THR) on self-regulation, socialization, communication, adaptive, and motor behaviors in children with autism spectrum disorder (ASD). Method Participants with ASD (ages 6–16 years; N=127) were stratified by nonverbal IQ standard scores (≤ 85 or > 85) and randomized to one of two groups for 10 weeks: THR intervention or a barn activity (BA) control group without horses that employed similar methods. The fidelity of the THR intervention was monitored. Participants were evaluated within one month pre- and post-intervention by raters blind to intervention conditions and unblinded caregiver questionnaires. During the intervention, caregivers rated participants’ behaviors weekly. Results Intent-to-treat analysis conducted on the 116 participants who completed a baseline assessment (THR n = 58; BA control n = 58) revealed significant improvements in the THR group compared to the control on measures of irritability (primary outcome) (p=.002; effect size [ES]=.50) and hyperactivity (p=.001; ES=0.53), beginning by week five of the intervention. Significant improvements in the THR group were also observed on a measure of social cognition (p=.05, ES=.41) and social communication (p=.003; ES =.63), along with the total number of words (p=.01; ES=.54) and new words (p=.01; ES=.54) spoken during a standardized language sample. Sensitivity analyses adjusting for age, IQ, and per-protocol analyses produced consistent results. Conclusion This is the first large-scale randomized, controlled trial demonstrating efficacy of THR for the ASD population, and findings are consistent with previous equine-assisted intervention studies. Clinical trial registration information Trial of Therapeutic Horseback Riding in Children and Adolescents With Autism Spectrum Disorder; http://clinicaltrials.gov/; NCT02301195. PMID:26088658
Grøndahl, Jan Robert; Rosvold, Elin Olaug
2008-01-01
Background Hypnosis treatment in general practice is a rather new concept. This pilot study was performed to evaluate the effect of a standardized hypnosis treatment used in general practice for patients with chronic widespread pain (CWP). Methods The study was designed as a randomized controlled study. Sixteen patients were randomized into a treatment group or a control group, each constituting eight patients. Seven patients in the treatment group completed the schedule. After the control period, five of the patients in the control group also received treatment, so that a total of 12 patients completed the treatment sessions. The intervention group went through a standardized hypnosis treatment with ten consecutive therapeutic sessions once a week, each lasting for about 30 minutes, focusing on ego-strengthening, relaxation, releasing muscular tension and increasing self-efficacy. A questionnaire was developed in order to calibrate the symptoms before and after the 10-week period, and the results were interpolated onto a scale from 0 to 100, with increasing numbers representing increasing suffering. Data were analyzed by means of t-tests. Results The treatment group improved (mean score falling from 62.5 to 55.4), while the control group deteriorated (from 37.2 to 45.1) (P = 0.045). The 12 patients who completed the treatment showed a mean improvement from 51.5 to 41.6 (P = 0.046). One year later the corresponding result was 41.3, indicating a persisting improvement. Conclusion The study indicates that hypnosis treatment may have a positive effect on pain and quality of life for patients with chronic muscular pain. Considering the limited number of patients, more studies should be conducted to confirm the results. Trial Registration The study was registered in ClinicalTrials.gov and released 27.08.07 Reg nr NCT00521807 Approval Number: 05032001. PMID:18801190
Buller, David B.; Berwick, Marianne; Lantz, Kathy; Buller, Mary Klein; Shane, James; Kane, Ilima; Liu, Xia
2014-01-01
Importance Mobile smart phones are rapidly emerging as an effective means of communicating with many Americans. Using mobile applications, they can access remote databases, track time and location, and integrate user input to provide tailored health information. Objective A smart phone mobile application providing personalized, real-time sun protection advice was evaluated in a randomized trial. Design The trial was conducted in 2012 and had a randomized pretest-posttest controlled design with a 10-week follow-up. Setting Data was collected from a nationwide population-based survey panel. Participants The trial enrolled a sample of n=604 non-Hispanic and Hispanic adults from the Knowledge Panel® aged 18 or older who owned an Android smart phone. Intervention The mobile application provided advice on sun protection (i.e., protection practices and risk of sunburn) and alerts (to apply/reapply sunscreen and get out of the sun), hourly UV Index, and vitamin D production based on the forecast UV Index, phone's time and location, and user input. Main Outcomes and Measures Percent of days using sun protection and time spent outdoors (days and minutes) in the midday sun and number of sunburns in the past 3 months were collected. Results Individuals in the treatment group reported more shade use but less sunscreen use than controls. Those who used the mobile app reported spending less time in the sun and using all protection behaviors combined more. Conclusions and Relevance The mobile application improved some sun protection. Use of the mobile application was lower than expected but associated with increased sun protection. Providing personalized advice when and where people are in the sun may help reduce sun exposure. PMID:25629710
Gudenkauf, Lisa M.; Antoni, Michael H.; Stagl, Jamie M.; Lechner, Suzanne C.; Jutagir, Devika R.; Bouchard, Laura C.; Blomberg, Bonnie B.; Glück, Stefan; Derhagopian, Robert P.; Giron, Gladys L.; Avisar, Eli; Torres-Salichs, Manuel A.; Carver, Charles S.
2015-01-01
Objective Women with breast cancer (BCa) report elevated distress post-surgery. Group-based cognitive-behavioral stress management (CBSM) following surgery improves psychological adaptation, though its key mechanisms remain speculative. This randomized controlled dismantling trial compared two interventions featuring elements thought to drive CBSM effects: a 5-week Cognitive-Behavioral Training (CBT) and 5-week Relaxation Training (RT) vs. a 5-week Health Education (HE) control group. Method Women with stage 0-III BCa (N = 183) were randomized to CBT, RT, or HE condition 2–10 weeks post-surgery. Psychosocial measures were collected at baseline (T1) and post-intervention (T2). Repeated-measures ANOVAs tested whether CBT and RT treatments improved primary measures of psychological adaptation and secondary measures of stress management resource perceptions from pre- to post-intervention relative to HE. Results Both CBT and RT groups reported reduced depressive affect. The CBT group reported improved emotional well-being/quality of life and less cancer-specific thought intrusions. The RT group reported improvements on illness-related social disruption. Regarding stress management resources, the CBT group reported increased reliability of social support networks, while the RT group reported increased confidence in relaxation skills. Psychological adaptation and stress management resource constructs were unchanged in the HE control group. Conclusions Non-metastatic breast cancer patients participating in two forms of brief, 5-week group-based stress management intervention after surgery showed improvements in psychological adaptation and stress management resources compared to an attention-matched control group. Findings provide preliminary support suggesting that using brief group-based stress management interventions may promote adaptation among non-metastatic breast cancer patients. PMID:25939017
Self-correcting random number generator
Humble, Travis S.; Pooser, Raphael C.
2016-09-06
A system and method for generating random numbers. The system may include a random number generator (RNG), such as a quantum random number generator (QRNG) configured to self-correct or adapt in order to substantially achieve randomness from the output of the RNG. By adapting, the RNG may generate a random number that may be considered random regardless of whether the random number itself is tested as such. As an example, the RNG may include components to monitor one or more characteristics of the RNG during operation, and may use the monitored characteristics as a basis for adapting, or self-correcting, to provide a random number according to one or more performance criteria.
Generation of pseudo-random numbers
NASA Technical Reports Server (NTRS)
Howell, L. W.; Rheinfurth, M. H.
1982-01-01
Practical methods for generating acceptable random numbers from a variety of probability distributions which are frequently encountered in engineering applications are described. The speed, accuracy, and guarantee of statistical randomness of the various methods are discussed.
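One practical method of the kind the report surveys is inverse-transform sampling, which works whenever the target distribution's CDF inverts in closed form. A minimal sketch for the exponential distribution (the function name is illustrative, not from the report):

```python
import math
import random

def exponential_variate(lam, rng=random):
    """Inverse-transform sampling for Exponential(lam).

    If U ~ Uniform(0, 1), then -ln(1 - U)/lam has CDF
    F(x) = 1 - exp(-lam * x), i.e. it is exponentially distributed,
    because x = F^{-1}(u) = -ln(1 - u)/lam.
    """
    u = rng.random()
    return -math.log(1.0 - u) / lam
```

The same recipe applies to any distribution with an invertible CDF; when no closed-form inverse exists, rejection or composition methods of the sort the report also covers are used instead.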
Localization for random and quasiperiodic potentials
NASA Astrophysics Data System (ADS)
Spencer, Thomas
1988-06-01
A survey is made of some recent mathematical results and techniques for Schrödinger operators with random and quasiperiodic potentials. A new proof of localization for random potentials, established in collaboration with H. von Dreifus, is sketched.
High speed optical quantum random number generation.
Fürst, Martin; Weier, Henning; Nauerth, Sebastian; Marangon, Davide G; Kurtsiefer, Christian; Weinfurter, Harald
2010-06-01
We present a fully integrated, ready-for-use quantum random number generator (QRNG) whose stochastic model is based on the randomness of detecting single photons in attenuated light. We show that often annoying deadtime effects associated with photomultiplier tubes (PMT) can be utilized to avoid postprocessing for bias or correlations. The random numbers directly delivered to a PC, generated at a rate of up to 50 Mbit/s, clearly pass all tests relevant for (physical) random number generators. PMID:20588431
The HEART Pathway Randomized Trial
Mahler, Simon A.; Riley, Robert F.; Hiestand, Brian C.; Russell, Gregory B.; Hoekstra, James W.; Lefebvre, Cedric W.; Nicks, Bret A.; Cline, David M.; Askew, Kim L.; Elliott, Stephanie B.; Herrington, David M.; Burke, Gregory L.; Miller, Chadwick D.
2015-01-01
Background The HEART Pathway is a decision aid designed to identify emergency department patients with acute chest pain for early discharge. No randomized trials have compared the HEART Pathway with usual care. Methods and Results Adult emergency department patients with symptoms related to acute coronary syndrome without ST-elevation on ECG (n=282) were randomized to the HEART Pathway or usual care. In the HEART Pathway arm, emergency department providers used the HEART score, a validated decision aid, and troponin measures at 0 and 3 hours to identify patients for early discharge. Usual care was based on American College of Cardiology/American Heart Association guidelines. The primary outcome, objective cardiac testing (stress testing or angiography), and secondary outcomes, index length of stay, early discharge, and major adverse cardiac events (death, myocardial infarction, or coronary revascularization), were assessed at 30 days by phone interview and record review. Participants had a mean age of 53 years, 16% had previous myocardial infarction, and 6% (95% confidence interval, 3.6%–9.5%) had major adverse cardiac events within 30 days of randomization. Compared with usual care, use of the HEART Pathway decreased objective cardiac testing at 30 days by 12.1% (68.8% versus 56.7%; P=0.048) and length of stay by 12 hours (9.9 versus 21.9 hours; P=0.013) and increased early discharges by 21.3% (39.7% versus 18.4%; P<0.001). No patients identified for early discharge had major adverse cardiac events within 30 days. Conclusions The HEART Pathway reduces objective cardiac testing during 30 days, shortens length of stay, and increases early discharges. These important efficiency gains occurred without any patients identified for early discharge suffering MACE at 30 days. PMID:25737484
Random bearings and their stability.
Mahmoodi Baram, Reza; Herrmann, Hans J
2005-11-25
Self-similar space-filling bearings have been proposed some time ago as models for the motion of tectonic plates and appearance of seismic gaps. These models have two features which, however, seem unrealistic, namely, high symmetry in the arrangement of the particles, and lack of a lower cutoff in the size of the particles. In this work, an algorithm for generating random bearings in both two and three dimensions is presented. Introducing a lower cutoff for the particle sizes, we study the instabilities of the bearing under an external force such as gravity. PMID:16384225
Random Matrix Theory and Econophysics
NASA Astrophysics Data System (ADS)
Rosenow, Bernd
2000-03-01
Random Matrix Theory (RMT) [1] is used in many branches of physics as a ``zero information hypothesis''. It describes generic behavior of different classes of systems, while deviations from its universal predictions allow one to identify system-specific properties. We use methods of RMT to analyze the cross-correlation matrix C of stock price changes [2] of the largest 1000 US companies. In addition to its scientific interest, the study of correlations between the returns of different stocks is also of practical relevance in quantifying the risk of a given stock portfolio. We find [3,4] that the statistics of most of the eigenvalues of the spectrum of C agree with the predictions of RMT, while there are deviations for some of the largest eigenvalues. We interpret these deviations as a system-specific property, e.g. containing genuine information about correlations in the stock market. We demonstrate that C shares universal properties with the Gaussian orthogonal ensemble of random matrices. Furthermore, we analyze the eigenvectors of C through their inverse participation ratio and find eigenvectors with large ratios at both edges of the eigenvalue spectrum - a situation reminiscent of localization theory results. This work was done in collaboration with V. Plerou, P. Gopikrishnan, T. Guhr, L.A.N. Amaral, and H.E. Stanley and is related to recent work of Laloux et al. 1. T. Guhr, A. Müller Groeling, and H.A. Weidenmüller, ``Random Matrix Theories in Quantum Physics: Common Concepts'', Phys. Rep. 299, 190 (1998). 2. See, e.g. R.N. Mantegna and H.E. Stanley, Econophysics: Correlations and Complexity in Finance (Cambridge University Press, Cambridge, England, 1999). 3. V. Plerou, P. Gopikrishnan, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Universal and Nonuniversal Properties of Cross Correlations in Financial Time Series'', Phys. Rev. Lett. 83, 1471 (1999). 4. V. Plerou, P. Gopikrishnan, T. Guhr, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Random Matrix Theory
RANDOM FORESTS FOR PHOTOMETRIC REDSHIFTS
Carliles, Samuel; Szalay, Alexander S.; Budavari, Tamas; Heinis, Sebastien; Priebe, Carey
2010-03-20
The main challenge today in photometric redshift estimation is not in the accuracy but in understanding the uncertainties. We introduce an empirical method based on Random Forests to address these issues. The training algorithm builds a set of optimal decision trees on subsets of the available spectroscopic sample, which provide independent constraints on the redshift of each galaxy. The combined forest estimates have intriguing statistical properties, notable among which are Gaussian errors. We demonstrate the power of our approach on multi-color measurements of the Sloan Digital Sky Survey.
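The abstract's key statistical point, that combining many bootstrap-trained estimators yields approximately Gaussian errors, can be illustrated with a deliberately crude numpy stand-in: each "tree" is replaced by the mean of a bootstrap resample, and the redshift sample is synthetic, not SDSS data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in for a spectroscopic training set: a strongly
# skewed (exponential) toy redshift distribution.
z_train = rng.exponential(0.4, size=500)

def tree_estimate(rng, z):
    # Crude stand-in for one tree: the mean of a bootstrap resample.
    boot = rng.choice(z, size=z.size, replace=True)
    return boot.mean()

# "Forest" of independent bootstrap estimates.
forest = np.array([tree_estimate(rng, z_train) for _ in range(2000)])

# Even though z_train is skewed, the combined estimates are nearly
# Gaussian about the sample mean (central limit theorem).
err = forest - forest.mean()
skew = np.mean(err**3) / np.mean(err**2) ** 1.5
```

The small skewness of the combined estimates, despite the skewed input, is the toy analogue of the "intriguing statistical properties" the abstract highlights.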
Randomized selection on the GPU
Monroe, Laura Marie; Wendelberger, Joanne R; Michalak, Sarah E
2011-01-13
We implement here a fast and memory-sparing probabilistic top N selection algorithm on the GPU. To our knowledge, this is the first direct selection algorithm in the literature for the GPU. The algorithm proceeds via a probabilistic guess-and-check process searching for the Nth element. It always gives a correct result and always terminates. The use of randomization reduces the amount of data that needs heavy processing, and so reduces the average time required for the algorithm. Probabilistic Las Vegas algorithms of this kind are a form of stochastic optimization and can be well suited to more general parallel processors with limited amounts of fast memory.
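A minimal serial sketch of a Las Vegas guess-and-check selection (my own illustration, not the authors' GPU code): a random sample fixes a cutoff so that only a small fraction of the data needs a full sort, a check guarantees correctness, and only the running time is random. The sampling parameters are arbitrary choices:

```python
import random

def randomized_top_n(data, n, sample_size=256, rng=random):
    # Guess a cutoff from a random sample so that only a small fraction
    # of the data needs sorting; check the guess and retry on failure.
    if n >= len(data):
        return sorted(data, reverse=True)
    while True:
        sample = sorted(rng.choice(data) for _ in range(sample_size))
        # Pick a cutoff expected to pass roughly 2n elements.
        q = max(0, sample_size - 1 - (2 * n * sample_size) // len(data))
        cutoff = sample[q]
        survivors = [x for x in data if x >= cutoff]
        if len(survivors) >= n:      # check: the guess kept enough
            # All of the n largest are >= cutoff, so they survive.
            return sorted(survivors, reverse=True)[:n]
        # Guess was too aggressive; retry. The result is always exact;
        # only the number of retries is random (Las Vegas).

data = list(range(10_000))
random.seed(0)
random.shuffle(data)
top5 = randomized_top_n(data, 5)
```

The check step is what makes this Las Vegas rather than Monte Carlo: a bad guess costs time, never correctness.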
Quantum random walks without walking
Manouchehri, K.; Wang, J. B.
2009-12-15
Quantum random walks have received much interest due to their nonintuitive dynamics, which may hold the key to a new generation of quantum algorithms. What remains a major challenge is a physical realization that is experimentally viable and not limited to special connectivity criteria. We present a scheme for walking on arbitrarily complex graphs, which can be realized using a variety of quantum systems such as a Bose-Einstein condensate trapped inside an optical lattice. This scheme is particularly elegant since the walker is not required to physically step between the nodes; only flipping coins is sufficient.
Stipčević, Mario
2016-03-01
In this work, a new type of elementary logic circuit, named random flip-flop (RFF), is proposed, experimentally realized, and studied. Unlike conventional Boolean logic circuits whose action is deterministic and highly reproducible, the action of a RFF is intentionally made maximally unpredictable and, in the proposed realization, derived from a fundamentally random process of emission and detection of light quanta. We demonstrate novel applications of RFF in randomness preserving frequency division, random frequency synthesis, and random number generation. Possible usages of these applications in the information and communication technology, cryptographic hardware, and testing equipment are discussed. PMID:27036825
49 CFR 655.45 - Random testing.
Code of Federal Regulations, 2010 CFR
2010-10-01
... is notified of selection for random drug or random alcohol testing proceed to the test site..., DEPARTMENT OF TRANSPORTATION PREVENTION OF ALCOHOL MISUSE AND PROHIBITED DRUG USE IN TRANSIT OPERATIONS Types... section, the minimum annual percentage rate for random drug testing shall be 50 percent of...
Randomness in Sequence Evolution Increases over Time
Wang, Guangyu; Sun, Shixiang; Zhang, Zhang
2016-01-01
The second law of thermodynamics states that entropy, as a measure of randomness in a system, increases over time. Although studies have investigated biological sequence randomness from different aspects, it remains unknown whether sequence randomness changes over time and whether this change is consistent with the second law of thermodynamics. To capture the dynamics of randomness in molecular sequence evolution, here we detect sequence randomness based on a collection of eight statistical random tests and investigate the randomness variation of coding sequences with an application to Escherichia coli. Given that core/essential genes are more ancient than specific/non-essential genes, our results clearly show that core/essential genes are more random than specific/non-essential genes and accordingly indicate that sequence randomness indeed increases over time, consistent with the second law of thermodynamics. We further find that an increase in sequence randomness leads to increasing randomness of GC content and longer sequence length. Taken together, our study presents an important finding, for the first time, that sequence randomness increases over time, which may provide profound insights for unveiling the underlying mechanisms of molecular sequence evolution. PMID:27224236
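One statistical random test of the general kind the abstract mentions could look like the following Wald-Wolfowitz runs test on a binary sequence. This is a generic illustration, not the authors' specific eight-test battery or their encoding of nucleotide sequences:

```python
import math
import random

def runs_z_score(bits):
    # Wald-Wolfowitz runs test: z-score of the observed number of runs
    # against the expectation for an i.i.d. random binary sequence.
    n = len(bits)
    ones = sum(bits)
    zeros = n - ones
    runs = 1 + sum(bits[i] != bits[i - 1] for i in range(1, n))
    mu = 2 * ones * zeros / n + 1
    var = (mu - 1) * (mu - 2) / (n - 1)
    return (runs - mu) / math.sqrt(var)

rng = random.Random(42)
random_bits = [rng.randrange(2) for _ in range(10000)]
periodic_bits = [i % 2 for i in range(10000)]   # structured, non-random
```

A truly random sequence gives a z-score near zero; the strictly alternating sequence has far too many runs and is flagged as highly non-random.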
Cluster randomization: a trap for the unwary.
Underwood, M; Barnett, A; Hajioff, S
1998-01-01
Controlled trials that randomize by practice can provide robust evidence to inform patient care. However, compared with randomizing by each individual patient, this approach may have substantial implications for sample size calculations and the interpretation of results. An increased awareness of these effects will improve the quality of research based on randomization by practice. PMID:9624757
Source-Independent Quantum Random Number Generation
NASA Astrophysics Data System (ADS)
Cao, Zhu; Zhou, Hongyi; Yuan, Xiao; Ma, Xiongfeng
2016-01-01
Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts: a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretical provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bit. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5 × 10^3 bits/s.
Random sources for cusped beams.
Li, Jia; Wang, Fei; Korotkova, Olga
2016-08-01
We introduce two novel classes of partially coherent sources whose degrees of coherence are described by the rectangular Lorentz-correlated Schell-model (LSM) and rectangular fractional multi-Gaussian-correlated Schell-model (FMGSM) functions. Based on the generalized Collins formula, analytical expressions are derived for the spectral density distributions of these beams propagating through a stigmatic ABCD optical system. It is shown that beams belonging to both classes form the spectral density apex that is much higher and sharper than that generated by the Gaussian Schell-model (GSM) beam with a comparable coherence state. We experimentally generate these beams by using a nematic, transmissive spatial light modulator (SLM) that serves as a random phase screen controlled by a computer. The experimental data is consistent with theoretical predictions. Moreover, it is illustrated that the FMGSM beam generated in our experiments has a better focusing capacity than the GSM beam with the same coherence state. The applications that can potentially benefit from the use of novel beams range from material surface processing, to communications and sensing through random media. PMID:27505746
Random Tensors and Planted Cliques
NASA Astrophysics Data System (ADS)
Brubaker, S. Charles; Vempala, Santosh S.
The r-parity tensor of a graph is a generalization of the adjacency matrix, where the tensor’s entries denote the parity of the number of edges in subgraphs induced by r distinct vertices. For r = 2, it is the adjacency matrix with 1’s for edges and -1’s for nonedges. It is well-known that the 2-norm of the adjacency matrix of a random graph is O(√n). Here we show that the 2-norm of the r-parity tensor is at most f(r) √n log^{O(r)} n, answering a question of Frieze and Kannan [1] who proved this for r = 3. As a consequence, we get a tight connection between the planted clique problem and the problem of finding a vector that approximates the 2-norm of the r-parity tensor of a random graph. Our proof method is based on an inductive application of concentration of measure.
The wasteland of random supergravities
NASA Astrophysics Data System (ADS)
Marsh, David; McAllister, Liam; Wrase, Timm
2012-03-01
We show that in a general 𝒩 = 1 supergravity with N ≫ 1 scalar fields, an exponentially small fraction of the de Sitter critical points are metastable vacua. Taking the superpotential and Kähler potential to be random functions, we construct a random matrix model for the Hessian matrix, which is well-approximated by the sum of a Wigner matrix and two Wishart matrices. We compute the eigenvalue spectrum analytically from the free convolution of the constituent spectra and find that in typical configurations, a significant fraction of the eigenvalues are negative. Building on the Tracy-Widom law governing fluctuations of extreme eigenvalues, we determine the probability P of a large fluctuation in which all the eigenvalues become positive. Strong eigenvalue repulsion makes this extremely unlikely: we find P ∝ exp(-cN^p), with c, p being constants. For generic critical points we find p ≈ 1.5, while for approximately-supersymmetric critical points, p ≈ 1.3. Our results have significant implications for the counting of de Sitter vacua in string theory, but the number of vacua remains vast.
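The random matrix model described, a Hessian approximated by a Wigner (GOE) matrix plus two Wishart matrices, can be sampled directly. The size, normalizations, and the constant shift below are toy choices of mine, not the supergravity-derived ones, but they reproduce the qualitative point that a significant fraction of the eigenvalues is negative in typical configurations:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200   # toy number of scalar fields

def goe(n, rng):
    # Wigner (GOE) matrix normalized so the spectrum fills [-2, 2].
    a = rng.standard_normal((n, n))
    return (a + a.T) / np.sqrt(2 * n)

def wishart(n, rng):
    # Wishart matrix with mean eigenvalue 1 (Marchenko-Pastur, q = 1).
    a = rng.standard_normal((n, n))
    return a @ a.T / n

# Toy Hessian: Wigner plus two Wisharts, minus an arbitrary constant
# shift standing in for the model-dependent diagonal contribution.
H = goe(N, rng) + wishart(N, rng) + wishart(N, rng) - np.eye(N)
eig = np.linalg.eigvalsh(H)
frac_negative = np.mean(eig < 0)
```

In a typical draw a sizeable minority of eigenvalues come out negative, which is why a fully positive-definite fluctuation is exponentially suppressed by eigenvalue repulsion.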
Parabolic Anderson Model in a Dynamic Random Environment: Random Conductances
NASA Astrophysics Data System (ADS)
Erhard, D.; den Hollander, F.; Maillard, G.
2016-06-01
The parabolic Anderson model is defined as the partial differential equation ∂u(x, t)/∂t = κΔu(x, t) + ξ(x, t)u(x, t), x ∈ ℤ^d, t ≥ 0, where κ ∈ [0, ∞) is the diffusion constant, Δ is the discrete Laplacian, and ξ is a dynamic random environment that drives the equation. The initial condition u(x, 0) = u_0(x), x ∈ ℤ^d, is typically taken to be non-negative and bounded. The solution of the parabolic Anderson equation describes the evolution of a field of particles performing independent simple random walks with binary branching: particles jump at rate 2dκ, split into two at rate ξ ∨ 0, and die at rate (-ξ) ∨ 0. In earlier work we looked at the Lyapunov exponents λ_p(κ) = lim_{t→∞} (1/t) log E([u(0,t)]^p)^{1/p}, p ∈ ℕ, and λ_0(κ) = lim_{t→∞} (1/t) log u(0,t). For the former we derived quantitative results on the κ-dependence for four choices of ξ: space-time white noise, independent simple random walks, the exclusion process and the voter model. For the latter we obtained qualitative results under certain space-time mixing conditions on ξ. In the present paper we investigate what happens when κΔ is replaced by Δ_𝓚, where 𝓚 = {𝓚(x, y) : x, y ∈ ℤ^d, x ∼ y} is a collection of random conductances between neighbouring sites replacing the constant conductances κ in the homogeneous model. We show that the associated annealed Lyapunov exponents λ_p(𝓚), p ∈ ℕ, are given by the formula λ_p(𝓚) = sup{λ_p(κ) : κ ∈ Supp(𝓚)}, where, for a fixed realisation of 𝓚, Supp(𝓚) is the set of values taken by the 𝓚-field. We also show that for the associated quenched Lyapunov exponent λ_0(𝓚) this formula only provides a lower bound, and we conjecture that an upper bound holds when Supp(𝓚) is replaced by its convex hull. Our proof is valid for three classes of reversible ξ, and for all 𝓚
Truly random number generation: an example
NASA Astrophysics Data System (ADS)
Frauchiger, Daniela; Renner, Renato
2013-10-01
Randomness is crucial for a variety of applications, ranging from gambling to computer simulations, and from cryptography to statistics. However, many of the currently used methods for generating randomness do not meet the criteria that are necessary for these applications to work properly and safely. A common problem is that a sequence of numbers may look random but nevertheless not be truly random. In fact, the sequence may pass all standard statistical tests and yet be perfectly predictable. This renders it useless for many applications. For example, in cryptography, the predictability of a "randomly" chosen password is obviously undesirable. Here, we review a recently developed approach to generating true (and hence unpredictable) randomness.
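A concrete toy instance of the problem described, a sequence that passes a naive statistical check yet is perfectly predictable, is any deterministic pseudorandom generator. The classic MINSTD linear congruential generator is used here purely for illustration:

```python
# MINSTD (Lehmer) linear congruential generator: its output looks
# statistically plausible but is completely predictable.
M, A = 2**31 - 1, 48271

def lcg_stream(seed, n):
    x, out = seed, []
    for _ in range(n):
        x = (A * x) % M
        out.append(x)
    return out

stream = lcg_stream(seed=12345, n=10000)

# A naive statistical check (bit bias) is passed...
bits = [x & 1 for x in stream]
bias = abs(sum(bits) / len(bits) - 0.5)

# ...yet a single observed output determines the entire future.
predicted_next = (A * stream[0]) % M
```

Knowing one internal state (here, one output) suffices to reproduce every later value, which is exactly the failure mode that makes such sequences useless for cryptography.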
On fatigue crack growth under random loading
NASA Astrophysics Data System (ADS)
Zhu, W. Q.; Lin, Y. K.; Lei, Y.
1992-09-01
A probabilistic analysis of the fatigue crack growth, fatigue life and reliability of a structural or mechanical component is presented on the basis of fracture mechanics and theory of random processes. The material resistance to fatigue crack growth and the time-history of the stress are assumed to be random. Analytical expressions are obtained for the special case in which the random stress is a stationary narrow-band Gaussian random process, and a randomized Paris-Erdogan law is applicable. As an example, the analytical method is applied to a plate with a central crack, and the results are compared with those obtained from digital Monte Carlo simulations.
Subramanian, Leena; Morris, Monica Busse; Brosnan, Meadhbh; Turner, Duncan L.; Morris, Huw R.; Linden, David E. J.
2016-01-01
Objective: Real-time functional magnetic resonance imaging (rt-fMRI) neurofeedback (NF) uses feedback of the patient’s own brain activity to self-regulate brain networks which in turn could lead to a change in behavior and clinical symptoms. The objective was to determine the effect of NF and motor training (MOT) alone on motor and non-motor functions in Parkinson’s Disease (PD) in a 10-week small Phase I randomized controlled trial. Methods: Thirty patients with Parkinson’s disease (PD; Hoehn and Yahr I-III) and no significant comorbidity took part in the trial with random allocation to two groups. Group 1 (NF: 15 patients) received rt-fMRI-NF with MOT. Group 2 (MOT: 15 patients) received MOT alone. The primary outcome measure was the Movement Disorder Society—Unified PD Rating Scale-Motor scale (MDS-UPDRS-MS), administered pre- and post-intervention “off-medication”. The secondary outcome measures were the “on-medication” MDS-UPDRS, the PD Questionnaire-39, and quantitative motor assessments after 4 and 10 weeks. Results: Patients in the NF group were able to upregulate activity in the supplementary motor area (SMA) by using motor imagery. They improved by an average of 4.5 points on the MDS-UPDRS-MS in the “off-medication” state (95% confidence interval: −2.5 to −6.6), whereas the MOT group improved only by 1.9 points (95% confidence interval +3.2 to −6.8). The improvement in the intervention group meets the minimal clinically important difference which is also on par with other non-invasive therapies such as repetitive Transcranial Magnetic Stimulation (rTMS). However, the improvement did not differ significantly between the groups. No adverse events were reported in either group. Interpretation: This Phase I study suggests that NF combined with MOT is safe and improves motor symptoms immediately after treatment, but larger trials are needed to explore its superiority over active control conditions. PMID:27375451
Tornøe, Birte; Andersen, Lars L; Skotte, Jørgen H; Jensen, Rigmor; Jensen, Claus; Madsen, Bjarne K; Gard, Gunvor; Skov, Liselotte; Hallström, Inger
2016-01-01
Background Childhood tension-type headache (TTH) is a prevalent and debilitating condition for the child and family. Low-cost nonpharmacological treatments are usually the first choice of professionals and parents. This study examined the outcomes of specific strength training for girls with TTH. Methods Forty-nine girls aged 9–18 years with TTH were randomized to patient education programs with 10 weeks of strength training and compared with those who were counseled by a nurse and physical therapist. Primary outcomes were headache frequency, intensity, and duration; secondary outcomes were neck–shoulder muscle strength, aerobic power, and pericranial tenderness, measured at baseline, after 10 weeks intervention, and at 12 weeks follow-up. Health-related quality of life (HRQOL) questionnaires were assessed at baseline and after 24 months. Results For both groups, headache frequency decreased significantly, P=0.001, as did duration, P=0.022, with no significant between-group differences. The odds of having headache on a random day decreased over the 22 weeks by 0.65 (0.50–0.84) (odds ratio [95% confidence interval]). For both groups, neck extension strength decreased significantly with a decrease in cervicothoracic extension/flexion ratio to 1.7, indicating a positive change in muscle balance. In the training group, shoulder strength increased ≥10% in 5/20 girls and predicted VO2max increased ≥15% for 4/20 girls. In the training group, 50% of girls with a headache reduction of ≥30% had an increase in VO2max >5%. For the counseling group, this was the case for 29%. A 24-month follow-up on HRQOL for the pooled sample revealed statistically significant improvements. Fifty-five percent of the girls reported little to no disability. Conclusion The results indicate that both physical health and HRQOL can be influenced significantly by physical exercise and nurse counseling. More research is needed to examine the relationship between physical exercise, VO2max, and
Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model
ERIC Educational Resources Information Center
Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin
2006-01-01
This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…
Causal Mediation Analyses for Randomized Trials.
Lynch, Kevin G; Cary, Mark; Gallop, Robert; Ten Have, Thomas R
2008-01-01
In the context of randomized intervention trials, we describe causal methods for analyzing how post-randomization factors constitute the process through which randomized baseline interventions act on outcomes. Traditionally, such mediation analyses have been undertaken with great caution, because they assume that the mediating factor is also randomly assigned to individuals in addition to the randomized baseline intervention (i.e., sequential ignorability). Because the mediating factors are typically not randomized, such analyses are unprotected from unmeasured confounders that may lead to biased inference. We review several causal approaches that attempt to reduce such bias without assuming that the mediating factor is randomized. However, these causal approaches require certain interaction assumptions that may be assessed if there is enough treatment heterogeneity with respect to the mediator. We describe available estimation procedures in the context of several examples from the literature and provide resources for software code. PMID:19484136
Certifying Unpredictable Randomness from Quantum Nonlocality
NASA Astrophysics Data System (ADS)
Bierhorst, Peter
2015-03-01
A device-independent quantum randomness protocol takes an initial random seed as input and then expands it into a longer random string. It has been proven that if the initial random seed is trusted to be unpredictable, then the longer output string can also be certified to be unpredictable by an experimental violation of Bell's inequality. It has furthermore been argued that the initial random seed may not need to be truly unpredictable, but only uncorrelated to specific parts of the Bell experiment. In this work, we demonstrate rigorously that this is indeed true, under assumptions related to ``no superdeterminism/no conspiracy'' concepts along with the no-signaling assumption. So if we assume that superluminal signaling is impossible, then a loophole-free test of Bell's inequality would be able to generate provably unpredictable randomness from an input source of (potentially predictable) classical randomness.
Security of practical private randomness generation
NASA Astrophysics Data System (ADS)
Pironio, Stefano; Massar, Serge
2013-01-01
Measurements on entangled quantum systems necessarily yield outcomes that are intrinsically unpredictable if they violate a Bell inequality. This property can be used to generate certified randomness in a device-independent way, i.e., without making detailed assumptions about the internal working of the quantum devices used to generate the random numbers. Furthermore these numbers are also private; i.e., they appear random not only to the user but also to any adversary that might possess a perfect description of the devices. Since this process requires a small initial random seed to sample the behavior of the quantum devices and to extract uniform randomness from the raw outputs of the devices, one usually speaks of device-independent randomness expansion. The purpose of this paper is twofold. First, we point out that in most real, practical situations, where the concept of device independence is used as a protection against unintentional flaws or failures of the quantum apparatuses, it is sufficient to show that the generated string is random with respect to an adversary that holds only classical side information; i.e., proving randomness against quantum side information is not necessary. Furthermore, the initial random seed does not need to be private with respect to the adversary, provided that it is generated in a way that is independent from the measured systems. The devices, however, will generate cryptographically secure randomness that cannot be predicted by the adversary, and thus one can, given access to free public randomness, talk about private randomness generation. The theoretical tools to quantify the generated randomness according to these criteria were already introduced in S. Pironio et al. [Nature (London) 464, 1021 (2010)], but the final results were improperly formulated. The second aim of this paper is to correct this inaccurate formulation and therefore lay out a precise theoretical framework for practical device
Postprocessing for quantum random-number generators: Entropy evaluation and randomness extraction
NASA Astrophysics Data System (ADS)
Ma, Xiongfeng; Xu, Feihu; Xu, He; Tan, Xiaoqing; Qi, Bing; Lo, Hoi-Kwong
2013-06-01
Quantum random-number generators (QRNGs) can offer a means to generate information-theoretically provable random numbers, in principle. In practice, unfortunately, the quantum randomness is inevitably mixed with classical randomness due to classical noises. To distill this quantum randomness, one needs to quantify the randomness of the source and apply a randomness extractor. Here, we propose a generic framework for evaluating quantum randomness of real-life QRNGs by min-entropy, and apply it to two different existing quantum random-number systems in the literature. Moreover, we provide a guideline of QRNG data postprocessing for which we implement two information-theoretically provable randomness extractors: Toeplitz-hashing extractor and Trevisan's extractor.
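A minimal sketch of the Toeplitz-hashing extractor mentioned in the abstract (the general construction, not the paper's specific implementation; in practice the output length m would be set by the evaluated min-entropy of the source, here it is just a parameter):

```python
import numpy as np

def toeplitz_extract(raw_bits, seed_bits, m):
    """Multiply n raw bits by an m-by-n Toeplitz matrix over GF(2).

    The matrix is defined by m + n - 1 seed bits; the map is linear,
    so it can be applied blockwise to a long raw stream.
    """
    raw = np.asarray(raw_bits)
    seed = np.asarray(seed_bits)
    n = raw.size
    assert seed.size == m + n - 1
    i = np.arange(m)[:, None]
    j = np.arange(n)[None, :]
    T = seed[i - j + n - 1]       # constant along diagonals: Toeplitz
    return (T @ raw) % 2

rng = np.random.default_rng(0)
raw = rng.integers(0, 2, size=64)            # weakly random input (toy)
seed = rng.integers(0, 2, size=16 + 64 - 1)  # public uniform seed
out = toeplitz_extract(raw, seed, m=16)
```

Compressing 64 weakly random bits into 16 output bits is the generic shape of the postprocessing step; the security argument (leftover hash lemma against the evaluated min-entropy) is what the paper supplies.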
Technology Transfer Automated Retrieval System (TEKTRAN)
Animal studies have documented that, compared with glucose, dietary fructose promotes dyslipidemia and insulin resistance. Experimental evidence that fructose consumption in humans promotes dyslipidemia and insulin resistance compared with glucose consumption has been equivocal. We tested the hypoth...
Váczi, Márk; Nagy, Szilvia A; Kőszegi, Tamás; Ambrus, Míra; Bogner, Péter; Perlaki, Gábor; Orsi, Gergely; Tóth, Katalin; Hortobágyi, Tibor
2014-10-01
The growth promoting effects of eccentric (ECC) contractions are well documented but it is unknown if the rate of stretch per se plays a role in such muscular responses in healthy aging human skeletal muscle. We tested the hypothesis that exercise training of the quadriceps muscle with low rate ECC and high rate ECC contractions in the form of stretch-shortening cycles (SSCs) but at equal total mechanical work would produce rate-specific adaptations in healthy old males aged 60-70. Both training programs produced similar improvements in maximal voluntary isometric (6%) and ECC torque (23%) and stretch-shortening cycle function (reduced contraction duration [24%] and enhanced elastic energy storage [12%]) (p<0.05). The rate of torque development increased 30% only after SSC exercise (p<0.05). Resting testosterone and cortisol levels were unchanged but after each program the acute exercise-induced cortisol levels were 12-15% lower (p<0.05). Both programs increased quadriceps size 2.5% (p<0.05). It is concluded that both ECC and SSC exercise training produces favorable adaptations in healthy old males' quadriceps muscle. Although the rate of muscle tension during the SSC vs. ECC contractions was about 4-fold greater, the total mechanical work seems to regulate the hypertrophic, hormonal, and most of the mechanical adaptations. However, SSC exercise was uniquely effective in improving a key deficiency of aging muscle, i.e., its ability to produce force rapidly. PMID:25064038
Flow Through Randomly Curved Manifolds
Mendoza, M.; Succi, S.; Herrmann, H. J.
2013-01-01
We present a computational study of the transport properties of campylotic (intrinsically curved) media. It is found that the relation between the flow through a campylotic medium, consisting of randomly located curvature perturbations, and the average Ricci scalar of the system, exhibits two distinct functional expressions, depending on whether the typical spatial extent of the curvature perturbation lies above or below the critical value maximizing the overall scalar of curvature. Furthermore, the flow through such systems as a function of the number of curvature perturbations is found to exhibit sublinear behavior for large concentrations, due to the interference between curvature perturbations leading to an overall less curved space. We have also characterized the flux through such media as a function of the local Reynolds number and the scale of interaction between impurities. For the purpose of this study, we have also developed and validated a new lattice Boltzmann model. PMID:24173367
Clique percolation in random networks.
Derényi, Imre; Palla, Gergely; Vicsek, Tamás
2005-04-29
The notion of k-clique percolation in random graphs is introduced, where k is the size of the complete subgraphs whose large scale organizations are analytically and numerically investigated. For the Erdős-Rényi graph of N vertices we obtain that the percolation transition of k-cliques takes place when the probability of two vertices being connected by an edge reaches the threshold p_c(k) = [(k - 1)N]^(-1/(k - 1)). At the transition point the scaling of the giant component with N is highly nontrivial and depends on k. We discuss why clique percolation is a novel and efficient approach to the identification of overlapping communities in large real networks. PMID:15904198
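The threshold formula from the abstract can be evaluated directly. As a sanity check (my own, not from the paper), for k = 2 it reduces to the familiar Erdős-Rényi giant-component threshold 1/N, since edges are 2-cliques, and larger cliques require denser graphs before they percolate:

```python
def p_c(k, N):
    # k-clique percolation threshold from the abstract:
    # p_c(k) = [(k - 1) N]^(-1/(k - 1)).
    return ((k - 1) * N) ** (-1.0 / (k - 1))

N = 10**6
thresholds = {k: p_c(k, N) for k in (2, 3, 4, 5)}
```

For a million-vertex graph this gives thresholds rising from 10^-6 at k = 2 to roughly 0.02 at k = 5.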
Ergodic theory, randomness, and "chaos".
Ornstein, D S
1989-01-13
Ergodic theory is the theory of the long-term statistical behavior of dynamical systems. The baker's transformation is an object of ergodic theory that provides a paradigm for the possibility of deterministic chaos. It can now be shown that this connection is more than an analogy and that at some level of abstraction a large number of systems governed by Newton's laws are the same as the baker's transformation. Going to this level of abstraction helps to organize the possible kinds of random behavior. The theory also gives new concrete results. For example, one can show that the same process could be produced by a mechanism governed by Newton's laws or by a mechanism governed by coin tossing. It also gives a statistical analog of structural stability. PMID:17747421
Extremal properties of random trees
NASA Astrophysics Data System (ADS)
Ben-Naim, E.; Krapivsky, P. L.; Majumdar, Satya N.
2001-09-01
We investigate extremal statistical properties such as the maximal and the minimal heights of randomly generated binary trees. By analyzing the master evolution equations we show that the cumulative distribution of extremal heights approaches a traveling wave form. The wave front in the minimal case is governed by the small-extremal-height tail of the distribution, and conversely, the front in the maximal case is governed by the large-extremal-height tail of the distribution. We determine several statistical characteristics of the extremal height distribution analytically. In particular, the expected minimal and maximal heights grow logarithmically with the tree size, N: h_min ~ v_min ln N and h_max ~ v_max ln N, with v_min = 0.373365... and v_max = 4.31107..., respectively. Corrections to this asymptotic behavior are of order O(ln ln N).
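The logarithmic growth of the extremal heights can be probed with a quick simulation of random binary search trees built from random permutations (assuming this standard model matches the abstract's randomly generated trees). Because the O(ln ln N) corrections are large at accessible sizes, the estimated prefactors only approach v_min ≈ 0.373 and v_max ≈ 4.31 asymptotically, so the check below uses loose bounds:

```python
import math
import random

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def build_random_bst(n, rng):
    # Insert a random permutation of 0..n-1 into a binary search tree.
    keys = list(range(n))
    rng.shuffle(keys)
    root = Node(keys[0])
    for k in keys[1:]:
        node = root
        while True:
            if k < node.key:
                if node.left is None:
                    node.left = Node(k)
                    break
                node = node.left
            else:
                if node.right is None:
                    node.right = Node(k)
                    break
                node = node.right
    return root

def extremal_depths(root):
    # Iterative traversal: minimal leaf depth and maximal node depth.
    min_leaf, max_depth = math.inf, 0
    stack = [(root, 0)]
    while stack:
        node, d = stack.pop()
        max_depth = max(max_depth, d)
        if node.left is None and node.right is None:
            min_leaf = min(min_leaf, d)
        for child in (node.left, node.right):
            if child is not None:
                stack.append((child, d + 1))
    return min_leaf, max_depth

rng = random.Random(0)
n, trials = 20000, 5
mins, maxs = [], []
for _ in range(trials):
    lo, hi = extremal_depths(build_random_bst(n, rng))
    mins.append(lo)
    maxs.append(hi)
v_min_est = sum(mins) / trials / math.log(n)
v_max_est = sum(maxs) / trials / math.log(n)
```

Both estimated ratios sit in the right regime (well below 1 for the minimal height, near 4 for the maximal height), consistent with the logarithmic scaling claimed in the abstract.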
Clique Percolation in Random Networks
NASA Astrophysics Data System (ADS)
Derényi, Imre; Palla, Gergely; Vicsek, Tamás
2005-04-01
The notion of k-clique percolation in random graphs is introduced, where k is the size of the complete subgraphs whose large scale organizations are analytically and numerically investigated. For the Erdős-Rényi graph of N vertices we obtain that the percolation transition of k-cliques takes place when the probability of two vertices being connected by an edge reaches the threshold p_c(k) = [(k-1)N]^(-1/(k-1)). At the transition point the scaling of the giant component with N is highly nontrivial and depends on k. We discuss why clique percolation is a novel and efficient approach to the identification of overlapping communities in large real networks.
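The threshold formula and the community idea can be illustrated with a small self-contained sketch for k = 3 (triangle percolation): two triangles belong to the same community when they share an edge. The graph size, the factors 0.5 and 2.0 around the threshold, the seed, and the union-find representation below are illustrative choices, not the authors' code:

```python
import itertools
import random

def kclique_threshold(k, n):
    # p_c(k) = [(k-1) n]^(-1/(k-1)), the transition point from the abstract
    return ((k - 1) * n) ** (-1.0 / (k - 1))

def triangle_communities(n, p, seed=0):
    """k = 3 clique communities of an Erdős-Rényi G(n, p) graph: connected
    components of the 'adjacent triangles' relation, merged via union-find."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u, v in itertools.combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    tris = [t for t in itertools.combinations(range(n), 3)
            if t[1] in adj[t[0]] and t[2] in adj[t[0]] and t[2] in adj[t[1]]]
    parent = list(range(len(tris)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    edge_owner = {}  # edge -> last triangle seen on it; chains unions per edge
    for i, t in enumerate(tris):
        for e in itertools.combinations(t, 2):
            if e in edge_owner:
                parent[find(i)] = find(edge_owner[e])
            edge_owner[e] = i
    comms = {}
    for i, t in enumerate(tris):
        comms.setdefault(find(i), set()).update(t)
    return list(comms.values())

n = 120
pc = kclique_threshold(3, n)
below = triangle_communities(n, 0.5 * pc, seed=1)
above = triangle_communities(n, 2.0 * pc, seed=1)
largest_below = max((len(c) for c in below), default=0)
largest_above = max((len(c) for c in above), default=0)
# a giant triangle community should appear only above the threshold
```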
Structure of random bidisperse foam.
Reinelt, Douglas A.; van Swol, Frank B.; Kraynik, Andrew Michael
2005-02-01
The Surface Evolver was used to compute the equilibrium microstructure of random soap foams with bidisperse cell-size distributions and to evaluate topological and geometric properties of the foams and individual cells. The simulations agree with the experimental data of Matzke and Nestler for the probability ρ(F) of finding cells with F faces and its dependence on the fraction of large cells. The simulations also agree with the theory for isotropic Plateau polyhedra (IPP), which describes the F-dependence of cell geometric properties, such as surface area, edge length, and mean curvature (diffusive growth rate); this is consistent with results for polydisperse foams. Cell surface areas are about 10% greater than spheres of equal volume, which leads to a simple but accurate relation for the surface free energy density of foams. The Aboav-Weaire law is not valid for bidisperse foams.
Random vibration of compliant wall
NASA Technical Reports Server (NTRS)
Yang, J.-N.; Heller, R. A.
1976-01-01
The paper is concerned with the realistic case of two-dimensional random motion of a membrane with bending stiffness supported on a viscoelastic spring substrate and on an elastic base plate under both subsonic and supersonic boundary layer turbulence. The cross-power spectral density of surface displacements is solved in terms of design variables of the compliant wall - such as the dimensions and material properties of the membrane (Mylar), substrate (PVC foam), and panel (aluminum) - so that a sensitivity analysis can be made to examine the influence of each design variable on the surface response statistics. Three numerical examples typical of compliant wall design are worked out and their response statistics in relation to wave drag and roughness drag are assessed. The results can serve as a guideline for experimental investigation of the drag reduction concept through the use of a compliant wall.
Lower bounds for randomized Exclusive Write PRAMs
MacKenzie, P.D.
1995-05-02
In this paper we study the question: How useful is randomization in speeding up Exclusive Write PRAM computations? Our results give further evidence that randomization is of limited use in these types of computations. First we examine a compaction problem on both the CREW and EREW PRAM models, and we present randomized lower bounds which match the best deterministic lower bounds known. (For the CREW PRAM model, the lower bound is asymptotically optimal.) These are the first non-trivial randomized lower bounds known for the compaction problem on these models. We show that our lower bounds also apply to the problem of approximate compaction. Next we examine the problem of computing boolean functions on the CREW PRAM model, and we present a randomized lower bound, which improves on the previous best randomized lower bound for many boolean functions, including the OR function. (The previous lower bounds for these functions were asymptotically optimal, but we improve the constant multiplicative factor.) We also give an alternate proof for the randomized lower bound on PARITY, which was already optimal to within a constant additive factor. Lastly, we give a randomized lower bound for integer merging on an EREW PRAM which matches the best deterministic lower bound known. In all our proofs, we use the Random Adversary method, which has previously only been used for proving lower bounds on models with Concurrent Write capabilities. Thus this paper also serves to illustrate the power and generality of this method for proving parallel randomized lower bounds.
Error Threshold of Fully Random Eigen Model
NASA Astrophysics Data System (ADS)
Li, Duo-Fang; Cao, Tian-Guang; Geng, Jin-Peng; Qiao, Li-Hua; Gu, Jian-Zhong; Zhan, Yong
2015-01-01
Species evolution is essentially a random process of interaction between biological populations and their environments. As a result, some physical parameters in evolution models are subject to statistical fluctuations. In this work, two important parameters in the Eigen model, the fitness and mutation rate, are treated as Gaussian distributed random variables simultaneously to examine the property of the error threshold. Numerical simulation results show that the error threshold in the fully random model appears as a crossover region instead of a phase transition point, and as the fluctuation strength increases the crossover region becomes smoother and smoother. Furthermore, it is shown that the randomization of the mutation rate plays a dominant role in changing the error threshold in the fully random model, which is consistent with the existing experimental data. The implication of the threshold change due to the randomization for antiviral strategies is discussed.
The MCNP5 Random number generator
Brown, F. B.; Nagaya, Y.
2002-01-01
MCNP and other Monte Carlo particle transport codes use random number generators to produce random variates from a uniform distribution on the interval (0,1). These random variates are then used in subsequent sampling from probability distributions to simulate the physical behavior of particles during the transport process. This paper describes the new random number generator developed for MCNP Version 5. The new generator will optionally preserve the exact random sequence of previous versions and is entirely conformant to the Fortran-90 standard, hence completely portable. In addition, skip-ahead algorithms have been implemented to efficiently initialize the generator for new histories, a capability that greatly simplifies parallel algorithms. Further, the precision of the generator has been increased, extending the period by a factor of 10^5. Finally, the new generator has been subjected to 3 different sets of rigorous and extensive statistical tests to verify that it produces a sufficiently random sequence.
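The skip-ahead idea mentioned above can be sketched for a generic linear congruential generator: jumping k steps reduces to repeated squaring of the affine update map, so each particle history can seed itself directly at its own stride offset. The multiplier, increment, and stride constants below follow values commonly quoted for MCNP5's default 63-bit generator, but treat them as illustrative, not authoritative:

```python
M = 1 << 63                  # modulus 2**63
G = 9219741426499971445      # 63-bit LCG multiplier (assumed MCNP5 default)
C = 1                        # increment
STRIDE = 152917              # per-history stride (assumed MCNP5 default)

def lcg_next(x):
    """One generator step: x <- (G*x + C) mod M."""
    return (G * x + C) % M

def skip_ahead(x, k):
    """Jump k steps in O(log k). The affine map f(x) = g*x + c composed
    with itself has multiplier g*g and increment c*(g + 1), so we square
    the (multiplier, increment) pair while scanning the bits of k."""
    g, c = G, C
    while k:
        if k & 1:
            x = (g * x + c) % M
        c = (c * (g + 1)) % M   # increment of the doubled map
        g = (g * g) % M         # multiplier of the doubled map
        k >>= 1
    return x

# initialize the 4th particle history directly,
# without stepping through 3*STRIDE intermediate values
seed4 = skip_ahead(1, 3 * STRIDE)
```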
2012-01-01
Background Luteal support with progesterone is necessary for successful implantation of the embryo following egg collection and embryo transfer in an in-vitro fertilization (IVF) cycle. Progesterone has been used for as little as 2 weeks and for as long as 12 weeks of gestation. The optimal length of treatment is unresolved at present and it remains unclear how long to treat women receiving luteal supplementation. Design The trial is a prospective, randomized, double-blind, placebo-controlled trial to investigate the effect of the duration of luteal support with progesterone in IVF cycles. Following 2 weeks of standard treatment and a positive biochemical pregnancy test, this randomized controlled trial will allocate women to a supplementary 8 weeks of treatment with vaginal progesterone or 8 weeks of placebo. Further studies would be required to investigate whether additional supplementation with progesterone is beneficial in early pregnancy. Discussion Currently at the Hewitt Centre, approximately 32.5% of women have a positive biochemical pregnancy test 2 weeks after embryo transfer. It is this population that is eligible for trial entry and randomization. Once the patient has confirmed a positive urinary pregnancy test, they will be invited to join the trial. Once the consent form has been completed by the patient, a trial prescription sheet will be sent to pharmacy with a stated collection time. The patient can then be randomized and the drugs dispensed according to pharmacy protocol. A blood sample will then be drawn for measurement of baseline hormone levels (progesterone, estradiol, free beta-human chorionic gonadotrophin, pregnancy-associated plasma protein-A, Activin A, Inhibin A and Inhibin B). The primary outcome measure is the proportion of all randomized women who continue successfully to a viable pregnancy (at least one fetus with fetal heart rate >100 beats/minute) on transabdominal/transvaginal ultrasound at 10 weeks post embryo transfer (12 weeks gestation).
Some physical applications of random hierarchical matrices
Avetisov, V. A.; Bikulov, A. Kh.; Vasilyev, O. A.; Nechaev, S. K.; Chertovich, A. V.
2009-09-15
The investigation of spectral properties of random block-hierarchical matrices as applied to dynamic and structural characteristics of complex hierarchical systems with disorder is proposed for the first time. Peculiarities of dynamics on random ultrametric energy landscapes are discussed and the statistical properties of scale-free and polyscale (depending on the topological characteristics under investigation) random hierarchical networks (graphs) obtained by multiple mapping are considered.
Private randomness expansion with untrusted devices
NASA Astrophysics Data System (ADS)
Colbeck, Roger; Kent, Adrian
2011-03-01
Randomness is an important resource for many applications, from gambling to secure communication. However, guaranteeing that the output from a candidate random source could not have been predicted by an outside party is a challenging task, and many supposedly random sources used today provide no such guarantee. Quantum solutions to this problem exist, for example a device which internally sends a photon through a beamsplitter and observes on which side it emerges, but, presently, such solutions require the user to trust the internal workings of the device. Here, we seek to go beyond this limitation by asking whether randomness can be generated using untrusted devices—even ones created by an adversarial agent—while providing a guarantee that no outside party (including the agent) can predict it. Since this is easily seen to be impossible unless the user has an initially private random string, the task we investigate here is private randomness expansion. We introduce a protocol for private randomness expansion with untrusted devices which is designed to take as input an initially private random string and produce as output a longer private random string. We point out that private randomness expansion protocols are generally vulnerable to attacks that can render the initial string partially insecure, even though that string is used only inside a secure laboratory; our protocol is designed to remove this previously unconsidered vulnerability by privacy amplification. We also discuss extensions of our protocol designed to generate an arbitrarily long random string from a finite initially private random string. The security of these protocols against the most general attacks is left as an open question.
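The privacy-amplification step invoked in this abstract is typically realized with a 2-universal hash family, for example multiplication by a random Toeplitz matrix over GF(2). The construction below is a generic sketch of that standard primitive, not the authors' protocol; the input/output sizes and seed are arbitrary:

```python
import random

def toeplitz_extract(raw_bits, out_len, seed_bits):
    """Multiply the raw-bit vector by a random 0/1 Toeplitz matrix over GF(2).
    The matrix T[i][j] = seed_bits[i - j + n - 1] is constant along diagonals,
    so n + out_len - 1 seed bits define the whole out_len x n matrix."""
    n = len(raw_bits)
    assert len(seed_bits) == n + out_len - 1
    out = []
    for i in range(out_len):
        acc = 0
        for j in range(n):
            acc ^= seed_bits[i - j + n - 1] & raw_bits[j]  # GF(2) dot product
        out.append(acc)
    return out

rng = random.Random(7)
n, m = 64, 16                                    # compress 64 raw bits to 16
raw = [rng.randrange(2) for _ in range(n)]       # partly secret raw bits
seed = [rng.randrange(2) for _ in range(n + m - 1)]
key = toeplitz_extract(raw, m, seed)
```

In an actual protocol the seed must come from the initially private random string, and the output length m is chosen from the estimated adversarial knowledge (leftover hash lemma), which this sketch does not model.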
Disturbing the random-energy landscape
NASA Astrophysics Data System (ADS)
Halpin-Healy, Timothy; Herbert, Devorah
1993-09-01
We examine the effects of correlated perturbations upon globally optimal paths through a random-energy landscape. Motivated by Zhang's early numerical investigations [Phys. Rev. Lett. 59, 2125 (1987)] into ground-state instabilities of disordered systems, as well as the work of Shapir [Phys. Rev. Lett. 66, 1473 (1991)] on random perturbations of roughened manifolds, we have studied the specific case of random bond interfaces unsettled by small random fields, confirming recent predictions for the instability exponents. Implications for disordered magnets and growing surfaces are discussed.
The Theory of Random Laser Systems
Xunya Jiang
2002-06-27
Studies of random laser systems are a new research direction with promising applications and theoretical interest, building on the theories of localization and laser physics. The research so far shows that random lasing modes exist inside these systems, which makes them quite different from common laser systems. The properties of the random lasing modes explain phenomena observed in experiments, such as multi-peak and anisotropic spectra, lasing-mode-number saturation, mode competition, and dynamic processes. To summarize, this dissertation has contributed the following to the study of random laser systems: (1) by comparing the Lamb theory with the Letokhov theory, general formulas for the threshold length or gain of random laser systems were obtained; (2) the vital weakness of previous time-independent methods in random laser research was pointed out; (3) a new model combining the FDTD method with semi-classical laser theory was introduced; its solutions explain the experimental observations of multi-peak and anisotropic emission spectra, and predict the saturation of the number of lasing modes and the length of localized lasing modes; (4) theoretical (Lamb theory) and numerical (FDTD and transfer-matrix) studies of the origin of localized lasing modes in random laser systems were carried out; and (5) random lasing modes were proposed as a new path to study wave localization in random systems, with a prediction of the lasing-threshold discontinuity at the mobility edge.
True random numbers from amplified quantum vacuum.
Jofre, M; Curty, M; Steinlechner, F; Anzolin, G; Torres, J P; Mitchell, M W; Pruneri, V
2011-10-10
Random numbers are essential for applications ranging from secure communications to numerical simulation and quantitative finance. Algorithms can rapidly produce pseudo-random outcomes, series of numbers that mimic most properties of true random numbers, while quantum random number generators (QRNGs) exploit intrinsic quantum randomness to produce true random numbers. Single-photon QRNGs are conceptually simple but produce few random bits per detection. In contrast, vacuum fluctuations are a vast resource for QRNGs: they are broad-band and thus can encode many random bits per second. Direct recording of vacuum fluctuations is possible, but requires shot-noise-limited detectors, at the cost of bandwidth. We demonstrate efficient conversion of vacuum fluctuations to true random bits using optical amplification of vacuum and interferometry. Using commercially available optical components, we demonstrate a QRNG at a bit rate of 1.11 Gbps. The proposed scheme has the potential to be extended to 10 Gbps and even up to 100 Gbps by taking advantage of high-speed modulation sources and detectors for optical fiber telecommunication devices. PMID:21997077
Random packing of spheres in Menger sponge
NASA Astrophysics Data System (ADS)
Cieśla, Michał; Barbasz, Jakub
2013-06-01
Random packing of spheres inside fractal collectors of dimension 2 < d < 3 is studied numerically using the Random Sequential Adsorption (RSA) algorithm. The paper focuses mainly on the measurement of the random packing saturation limit. Additionally, scaling properties of density autocorrelations in the obtained packings are analyzed, and the RSA kinetics coefficients are measured. The results obtained allow testing of a phenomenological relation between the random packing saturation density and the collector dimension. Additionally, the simulations, together with previously obtained results, confirm that, in general, the known dimensional relations are obeyed by systems having non-integer dimension, at least for d < 3. PMID:23758392
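The RSA algorithm named above is easy to sketch in a flat 2D geometry; the unit-square collector, disk radius, attempt budget, and seed below are illustrative stand-ins for the fractal collectors actually studied:

```python
import math
import random

def rsa_disks(radius, attempts, seed=2):
    """Random Sequential Adsorption of equal disks on the unit square:
    draw trial centers uniformly at random (kept a distance `radius` from
    the walls so disks stay inside) and accept a trial only if it overlaps
    no previously accepted disk; a fixed attempt budget approximates the
    saturation (jamming) limit."""
    rng = random.Random(seed)
    centers = []
    min_d2 = (2.0 * radius) ** 2          # squared minimum center distance
    lo, span = radius, 1.0 - 2.0 * radius
    for _ in range(attempts):
        x = lo + span * rng.random()
        y = lo + span * rng.random()
        if all((x - cx) ** 2 + (y - cy) ** 2 >= min_d2 for cx, cy in centers):
            centers.append((x, y))
    return centers

r = 0.03
centers = rsa_disks(r, attempts=20000)
coverage = len(centers) * math.pi * r * r   # fraction of the unit square covered
# the saturation coverage for 2D RSA of equal disks is known to approach ~0.547;
# a finite attempt budget stops somewhat below that
```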
Cialdella-Kam, Lynn; Nieman, David C.; Knab, Amy M.; Shanely, R. Andrew; Meaney, Mary Pat; Jin, Fuxia; Sha, Wei; Ghosh, Sujoy
2016-01-01
Flavonoids and fish oils have anti-inflammatory and immune-modulating influences. The purpose of this study was to determine if a mixed flavonoid-fish oil supplement (Q-Mix; 1000 mg quercetin, 400 mg isoquercetin, 120 mg epigallocatechin (EGCG) from green tea extract, 400 mg n3-PUFAs (omega-3 polyunsaturated fatty acid) (220 mg eicosapentaenoic acid (EPA) and 180 mg docosahexaenoic acid (DHA)) from fish oil, 1000 mg vitamin C, 40 mg niacinamide, and 800 µg folic acid) would reduce complications associated with obesity; that is, reduce inflammatory and oxidative stress markers and alter genomic profiles in overweight women. Overweight and obese women (n = 48; age = 40–70 years) were assigned to Q-Mix or placebo groups using randomized double-blinded placebo-controlled procedures. Overnight fasted blood samples were collected at 0 and 10 weeks and analyzed for cytokines, C-reactive protein (CRP), F2-isoprostanes, and whole-blood-derived mRNA, which was assessed using Affymetrix HuGene-1_1 ST arrays. Statistical analysis included two-way ANOVA models for blood analytes and gene expression and pathway and network enrichment methods for gene expression. Plasma levels increased with Q-Mix supplementation by 388% for quercetin, 95% for EPA, 18% for DHA, and 20% for docosapentaenoic acid (DPA). Q-Mix did not alter plasma levels for CRP (p = 0.268), F2-isoprostanes (p = 0.273), and cytokines (p > 0.05). Gene set enrichment analysis revealed upregulation of pathways in Q-Mix vs. placebo related to interferon-induced antiviral mechanism (false discovery rate, FDR < 0.001). Overrepresentation analysis further disclosed an inhibition of phagocytosis-related inflammatory pathways in Q-Mix vs. placebo. Thus, a 10-week Q-Mix supplementation elicited a significant rise in plasma quercetin, EPA, DHA, and DPA, as well as stimulated an antiviral and inflammation whole-blood transcriptomic response in overweight women. PMID:27187447
Rouhani, M H; Kelishadi, R; Hashemipour, M; Esmaillzadeh, A; Surkan, P J; Keshavarz, A; Azadbakht, L
2016-04-01
Although the effects of dietary glycemic index (GI) on insulin resistance are well documented in adults, the complex interaction among glucose intolerance, inflammatory markers, and adipokine concentration has not been well studied, especially among adolescents. We investigated the effect of a low glycemic index (LGI) diet on insulin concentration, fasting blood sugar (FBS), inflammatory markers, and serum adiponectin concentration among healthy obese/overweight adolescent females. In this parallel randomized clinical trial, 2 different diets, an LGI diet and a healthy nutritional recommendation diet (HNRD) with similar macronutrient composition, were prescribed to 50 obese and overweight adolescent girls with the same pubertal status. Biochemical markers (FBS, serum insulin concentration, high-sensitivity C-reactive protein (hs-CRP), interleukin 6 (IL-6), and adiponectin) were measured before and after a 10-week intervention. Using an intention-to-treat analysis, data from 50 subjects were analyzed. According to a dietary assessment, the GI in the LGI group was 43.22±0.54. While the means for FBS, serum insulin concentration, the homeostasis model assessment (HOMA), the quantitative insulin sensitivity check index (QUICKI), and adiponectin concentration did not differ significantly within each group, the average hs-CRP and IL-6 decreased significantly in the LGI diet group after the 10-week intervention (p=0.009 and p=0.001, respectively). Comparing percent changes, we found a marginally significant decrease in hs-CRP in the LGI group compared with the HNRD group after adjusting for confounders. Compliance with an LGI diet may have a favorable effect on inflammation among overweight and obese adolescent girls. PMID:27065462
Random Copolymer: Gaussian Variational Approach
NASA Astrophysics Data System (ADS)
Moskalenko, A.; Kuznetsov, Yu. A.; Dawson, K. A.
1997-03-01
We study the phase transitions of a random copolymer chain with quenched disorder. We calculate the average over the quenched disorder in replica space and apply a Gaussian variational approach based on a generic quadratic trial Hamiltonian in terms of the correlation functions of monomer Fourier coordinates. This has the advantage that it allows us to incorporate fluctuations of the density, determined self-consistently, and to study collapse, phase separation transitions and the onset of the freezing transition within the same mean field theory. The effective free energy of the system is derived analytically and analyzed numerically in the one-step Parisi scheme. Such quantities as the radius of gyration, end-to-end distance or the average value of the overlap between different replicas are treated as observables and evaluated by introducing appropriate external fields to the Hamiltonian. As a result we obtain the phase diagram in terms of model parameters, scaling for the freezing transition and the dependence of correlation functions on the chain index.
Efficient robust conditional random fields.
Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A
2015-10-01
Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, thereby enabling discovery of the relevant unary and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient and the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that OGM can tackle RCRF model training very efficiently, achieving the optimal convergence rate [Formula: see text] (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs. PMID:26080050
Random hypergraphs and their applications
NASA Astrophysics Data System (ADS)
Ghoshal, Gourab; Zlatić, Vinko; Caldarelli, Guido; Newman, M. E. J.
2009-06-01
In the last few years we have witnessed the emergence, primarily in online communities, of new types of social networks that require for their representation more complex graph structures than have been employed in the past. One example is the folksonomy, a tripartite structure of users, resources, and tags (labels collaboratively applied by the users to the resources in order to impart meaningful structure on an otherwise undifferentiated database). Here we propose a mathematical model of such tripartite structures that represents them as random hypergraphs. We show that it is possible to calculate many properties of this model exactly in the limit of large network size and we compare the results against observations of a real folksonomy, that of the online photography website Flickr. We show that in some cases the model matches the properties of the observed network well, while in others there are significant differences, which we find to be attributable to the practice of multiple tagging, i.e., the application by a single user of many tags to one resource or one tag to many resources. PMID:19658575
Dynamic computing random access memory
NASA Astrophysics Data System (ADS)
Traversa, F. L.; Bonani, F.; Pershin, Y. V.; Di Ventra, M.
2014-07-01
The present von Neumann computing paradigm involves a significant amount of information transfer between a central processing unit and memory, with concomitant limitations in the actual execution speed. However, it has been recently argued that a different form of computation, dubbed memcomputing (Di Ventra and Pershin 2013 Nat. Phys. 9 200-2) and inspired by the operation of our brain, can resolve the intrinsic limitations of present-day architectures by allowing for computing and storing of information on the same physical platform. Here we show a simple and practical realization of memcomputing that utilizes easy-to-build memcapacitive systems. We name this architecture dynamic computing random access memory (DCRAM). We show that DCRAM provides massively parallel and polymorphic digital logic, namely it allows for different logic operations with the same architecture, by varying only the control signals. In addition, by taking into account realistic parameters, its energy expenditures can be as low as a few fJ per operation. DCRAM is fully compatible with CMOS technology, can be realized with current fabrication facilities, and therefore can really serve as an alternative to the present computing technology. PMID:24972387
Migration in asymmetric, random environments
NASA Astrophysics Data System (ADS)
Deem, Michael; Wang, Dong
Migration is a key mechanism for the expansion of communities. As a population migrates, it experiences a changing environment. In heterogeneous environments, rapid adaptation is key to the evolutionary success of the population. In the case of human migration, environmental heterogeneity is naturally asymmetric in the North-South and East-West directions. Here we consider migration in random, asymmetric, modularly correlated environments. Knowledge about the environment determines the fitness of each individual. We find that the speed of migration is inversely proportional to the rate of environmental change, and in particular that North-South migration rates are lower than East-West migration rates. Fast exchange of pieces of knowledge between individuals, similar to horizontal gene transfer in genetic systems, can help spread beneficial knowledge through the population. We show that increased modularity of the relation between knowledge and fitness enhances the rate of evolution. We investigate the relation between the optimal information exchange rate and the modularity of the dependence of fitness on knowledge. These results for the dependence of migration rate on heterogeneity, asymmetry, and modularity are consistent with existing archaeological facts.
Hierarchy in directed random networks
NASA Astrophysics Data System (ADS)
Mones, Enys
2013-02-01
In recent years, the theory and application of complex networks have developed rapidly, driven by the increasing amount of data from real systems and the fruitful application of powerful methods from statistical physics. Many important characteristics of social or biological systems can be described by the study of their underlying structure of interactions. Hierarchy is one of these features that can be formulated in the language of networks. In this paper we present some (qualitative) analytic results on the hierarchical properties of random network models with zero correlations and also investigate, mainly numerically, the effects of different types of correlations. The behavior of the hierarchy is different in the absence and the presence of giant components. We show that the hierarchical structure can be drastically different if there are one-point correlations in the network. We also show numerical results suggesting that the hierarchy does not change monotonically with the correlations and there is an optimal level of nonzero correlations maximizing the level of hierarchy.
Organization of growing random networks
Krapivsky, P. L.; Redner, S.
2001-06-01
The organizational development of growing random networks is investigated. These growing networks are built by adding nodes successively, and linking each to an earlier node of degree k with an attachment probability A_k. When A_k grows more slowly than linearly with k, the number of nodes with k links, N_k(t), decays faster than a power law in k, while for A_k growing faster than linearly in k, a single node emerges which connects to nearly all other nodes. When A_k is asymptotically linear, N_k(t) ~ t k^(-ν), with ν dependent on details of the attachment probability, but in the range 2 < ν < ∞. The combined age and degree distribution of nodes shows that old nodes typically have a large degree. There is also a significant correlation in the degrees of neighboring nodes, so that nodes of similar degree are more likely to be connected. The size distributions of the in and out components of the network with respect to a given node, namely its "descendants" and "ancestors", are also determined. The in component exhibits a robust s^(-2) power-law tail, where s is the component size. The out component has a typical size of order ln t, and it provides basic insights into the genealogy of the network.
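The asymptotically linear case (A_k proportional to k, i.e. preferential attachment) can be simulated with a standard trick: keep a list in which each node appears once per incident edge, so a uniform draw from that list is exactly a degree-weighted draw. A minimal illustrative sketch, not the authors' analysis:

```python
import random

def grow_network(n, seed=1):
    """Grow a random network by attaching each new node to an earlier
    node chosen with probability proportional to its degree (A_k = k)."""
    rng = random.Random(seed)
    # each node appears in `endpoints` once per incident edge, so a
    # uniform choice from this list is a degree-proportional choice
    endpoints = [0, 1]          # start from a single edge 0-1
    degree = {0: 1, 1: 1}
    for new in range(2, n):
        target = rng.choice(endpoints)
        endpoints += [new, target]
        degree[new] = 1
        degree[target] += 1
    return degree
```

Plotting the degree histogram of such a run reproduces the power-law regime; replacing the uniform choice with sampling weighted by k^γ for γ > 1 illustrates the super-linear regime in which a single near-universal hub emerges.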
Aggregated recommendation through random forests.
Zhang, Heng-Ru; Min, Fan; He, Xu
2014-01-01
Aggregated recommendation refers to the process of suggesting one kind of items to a group of users. Compared to user-oriented or item-oriented approaches, it is more general and, therefore, more appropriate for cold-start recommendation. In this paper, we propose a random forest approach to create aggregated recommender systems. The approach is used to predict the rating of a group of users to a kind of items. In the preprocessing stage, we merge user, item, and rating information to construct an aggregated decision table, where rating information serves as the decision attribute. We also model the data conversion process corresponding to the new user, new item, and both new problems. In the training stage, a forest is built for the aggregated training set, where each leaf is assigned a distribution of discrete rating. In the testing stage, we present four predicting approaches to compute evaluation values based on the distribution of each tree. Experimental results on the well-known MovieLens dataset show that the aggregated approach maintains an acceptable level of accuracy. PMID:25180204
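The preprocessing and leaf-distribution ideas can be sketched in a few lines. Field names and the expected-value rule below are illustrative assumptions, not the paper's exact design (the paper presents four prediction rules; only one is shown):

```python
from collections import Counter

def aggregate_table(users, items, ratings):
    # merge user, item, and rating information into one decision table;
    # "rating" serves as the decision attribute
    by_user = {u["user_id"]: u for u in users}
    by_item = {i["item_id"]: i for i in items}
    return [{**by_user[r["user_id"]], **by_item[r["item_id"]], "rating": r["rating"]}
            for r in ratings]

def leaf_distribution(rows):
    # each tree leaf stores a distribution over the discrete ratings
    counts = Counter(row["rating"] for row in rows)
    total = sum(counts.values())
    return {rating: c / total for rating, c in counts.items()}

def predict_expected(dist):
    # one possible evaluation rule: the expected rating under the leaf's distribution
    return sum(rating * p for rating, p in dist.items())
```

A full forest would train many trees on bootstrap samples of the aggregated table and combine their leaf distributions; the sketch shows only the table construction and a single leaf's prediction.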
Randomized Trial of Behavior Therapy for Adults with Tourette’s Disorder
Wilhelm, Sabine; Peterson, Alan L.; Piacentini, John; Woods, Douglas W.; Deckersbach, Thilo; Sukhodolsky, Denis G.; Chang, Susanna; Liu, Haibei; Dziura, James; Walkup, John T.; Scahill, Lawrence
2013-01-01
Context Tics in Tourette syndrome begin in childhood, peak in early adolescence, and often decline by early adulthood. However, some adult patients continue to have impairing tics. Medications for tics are often effective but can cause adverse effects. Behavior therapy may offer an alternative but has not been examined in a large-scale controlled trial in adults. Objective To test the efficacy of a comprehensive behavioral intervention for tics in adults with Tourette syndrome of at least moderate severity. Design A randomized, controlled trial with posttreatment evaluations at 3 and 6 months for positive responders. Setting Three outpatient research clinics. Subjects Subjects (N = 122; 78 males, age 16 to 69 years) with Tourette syndrome or chronic tic disorder. Interventions Eight sessions of Comprehensive Behavioral Intervention for Tics or 8 sessions of supportive treatment delivered over 10 weeks. Subjects showing a positive response were given 3 monthly booster sessions. Main Outcome Measures Total Tic score of the Yale Global Tic Severity Scale and the Improvement scale of the Clinical Global Impression rated by a clinician blind to treatment assignment. Results Behavior therapy was associated with a significantly greater decrease on the Yale Global Tic Severity Scale (24.0 ± 6.47 to 17.8 ± 7.32) from baseline to endpoint compared to the control treatment (21.8 ± 6.59 to 19.3 ± 7.40) (P < .001; effect size = 0.57). Twenty-four of 63 subjects (38.1%) in CBIT were rated as Much Improved or Very Much Improved on the Clinical Global Impression-Improvement scale compared to 6.8% (4 of 63) in the control group (P < .0001). Attrition was 13.9% with no difference across groups. Subjects in behavior therapy available for assessment at 6 months posttreatment showed continued benefit. Conclusions Comprehensive behavior therapy is a safe and effective intervention for adults with Tourette syndrome. PMID:22868933
Sustained-Release Methylphenidate in a Randomized Trial of Treatment of Methamphetamine Use Disorder
Ling, Walter; Chang, Linda; Hillhouse, Maureen; Ang, Alfonso; Striebel, Joan; Jenkins, Jessica; Hernandez, Jasmin; Olaer, Mary; Mooney, Larissa; Reed, Susan; Fukaya, Erin; Kogachi, Shannon; Alicata, Daniel; Holmes, Nataliya; Esagoff, Asher
2014-01-01
Background and aims No effective pharmacotherapy for methamphetamine (MA) use disorder has yet been found. This study evaluated sustained-release methylphenidate (MPH-SR) compared with placebo (PLA) for treatment of MA use disorder in people also undergoing behavioural support and motivational incentives. Design This was a randomized, double-blind, placebo-controlled design with MPH-SR or PLA provided for 10 weeks (active phase) followed by 4 weeks of single-blind PLA. Twice-weekly clinic visits, weekly group counseling (CBT), and motivational incentives (MI) for MA-negative urine drug screens (UDS) were included. Setting Treatment sites were in Los Angeles, California (LA) and Honolulu, Hawaii (HH), USA. Participants 110 MA-dependent (via DSM-IV) participants (LA = 90; HH = 20). Measurements The primary outcome measure is self-reported days of MA use during the last 30 days of the active phase. Included in the current analyses are drug use (UDS and self-report), retention, craving, compliance (dosing, CBT, MI), adverse events, and treatment satisfaction. Findings No difference was found between treatment groups in self-reported days of MA use during the last 30 days of the active phase (p=0.22). In planned secondary outcomes analyses, however, the MPH group had fewer self-reported MA use days from baseline through the active phase compared with the PLA group (p=0.05). The MPH group also had lower craving scores and fewer marijuana-positive UDS than the PLA group in the last 30 days of the active phase. The two groups had similar retention, other drug use, adverse events, and treatment satisfaction. Conclusions Methylphenidate may lead to a reduction in concurrent methamphetamine use when provided as treatment for patients undergoing behavioural support for moderate to severe methamphetamine use disorder but this requires confirmation. PMID:24825486
A randomized controlled trial of atomoxetine in generalized social anxiety disorder.
Ravindran, Lakshmi N; Kim, Daniel S; Letamendi, Andrea M; Stein, Murray B
2009-12-01
The current mainstays of social anxiety disorder pharmacotherapy are serotonergic agents, with less known about the efficacy of more noradrenergic drugs. Atomoxetine (ATM), a highly selective norepinephrine reuptake inhibitor, is currently approved for the treatment of attention-deficit/hyperactivity disorder (ADHD). We describe the first controlled trial of ATM with respect to efficacy and tolerability in adults with the generalized subtype of social anxiety disorder (GSAD) without comorbid ADHD. Twenty-seven outpatients with clinically prevailing diagnoses of GSAD by the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition were randomized in a 1:1 ratio to 10 weeks of double-blind flexible-dose treatment with either ATM 40-100 mg per day (n = 14) or placebo (n = 13). Primary efficacy outcome was score at end point on the Liebowitz Social Anxiety Scale in the intention-to-treat sample. There were no significant group differences in the proportion of patients completing the study (ATM, 79%; placebo, 77%). Although ATM was well tolerated, there were no significant differences in clinical efficacy between ATM and placebo for GSAD. There were few responders overall (ATM, 21%; placebo, 33%), and proportions were similar in each group (χ²(1, N = 26) = 0.47; P = 0.67). Analysis of variance with repeated measures on the Liebowitz Social Anxiety Scale was performed to detect any differential change in social anxiety symptoms between groups. A significant time effect was found (F = 8.71; P = 0.007), but the time-by-treatment interaction was nonsignificant (F = 0.013; P = 0.91). Although the small sample size limits confidence in the reported results, the comparable, and low, response rates for ATM and placebo suggest that in the absence of comorbid ADHD, ATM is unlikely to be an effective agent for the treatment of GSAD. PMID:19910721
Schittkowski, Michael P.; Antal, Andrea; Ambrus, Géza Gergely; Paulus, Walter; Dannhauer, Moritz; Michalik, Romualda; Mante, Alf; Bola, Michal; Lux, Anke; Kropf, Siegfried; Brandt, Stephan A.; Sabel, Bernhard A.
2016-01-01
Background Vision loss after optic neuropathy is considered irreversible. Here, repetitive transorbital alternating current stimulation (rtACS) was applied in partially blind patients with the goal of activating their residual vision. Methods We conducted a multicenter, prospective, randomized, double-blind, sham-controlled trial in an ambulatory setting with daily application of rtACS (n = 45) or sham-stimulation (n = 37) for 50 min per day over 10 weekdays. A volunteer sample of patients with optic nerve damage (mean age 59.1 yrs) was recruited. The primary outcome measure for efficacy was super-threshold visual fields within 48 hrs after the last treatment day and at 2-months follow-up. Secondary outcome measures were near-threshold visual fields, reaction time, visual acuity, and resting-state EEGs to assess changes in brain physiology. Results The rtACS-treated group had a mean improvement in visual field of 24.0%, which was significantly greater than after sham-stimulation (2.5%). This improvement persisted for at least 2 months in terms of both within- and between-group comparisons. Secondary analyses revealed improvements of near-threshold visual fields in the central 5° and increased thresholds in static perimetry after rtACS, and improved reaction times, but visual acuity did not change compared to shams. Visual field improvement induced by rtACS was associated with EEG power-spectra and coherence alterations in visual cortical networks, which are interpreted as signs of neuromodulation. Current flow simulation indicates current in the frontal cortex, eye, and optic nerve and in the subcortical but not in the cortical regions. Conclusion rtACS treatment is a safe and effective means to partially restore vision after optic nerve damage, probably by modulating brain plasticity. This class 1 evidence suggests that visual fields can be improved in a clinically meaningful way. Trial Registration ClinicalTrials.gov NCT01280877 PMID:27355577
2011-01-01
Background Randomized controlled trials (RCT) are required to test relationships between physical activity and cognition in children, but these must be informed by exploratory studies. This study aimed to inform future RCT by: conducting practical utility and reliability studies to identify appropriate cognitive outcome measures; piloting an RCT of a 10-week physical education (PE) intervention which involved 2 hours per week of aerobically intense PE compared to 2 hours of standard PE (control). Methods 64 healthy children (mean age 6.2 yrs SD 0.3; 33 boys) were recruited from 6 primary schools. Outcome measures were the Cambridge Neuropsychological Test Battery (CANTAB), the Attention Network Test (ANT), the Cognitive Assessment System (CAS) and the short form of the Conners' Parent Rating Scale (CPRS:S). Physical activity was measured habitually and during PE sessions using the Actigraph accelerometer. Results Test-retest intraclass correlations for CANTAB Spatial Span (r = 0.51) and Spatial Working Memory Errors (0.59) and ANT Reaction Time (0.37) and ANT Accuracy (0.60) were significant, but low. Physical activity was significantly higher during intervention vs. control PE sessions (p < 0.0001). There were no significant differences between intervention and control group changes in CAS scores. Differences between intervention and control groups favoring the intervention were observed for CANTAB Spatial Span, CANTAB Spatial Working Memory Errors, and ANT Accuracy. Conclusions The present study has identified practical and age-appropriate cognitive and behavioral outcome measures for future RCT, and identified that schools are willing to increase PE time. Trial registration number ISRCTN70853932 (http://www.controlled-trials.com) PMID:22034850
Depp, Colin A; Ceglowski, Jenni; Wang, Vicki C; Yaghouti, Faraz; Mausbach, Brent T; Thompson, Wesley K; Granholm, Eric L
2014-01-01
Background Psychosocial interventions for bipolar disorder are frequently unavailable and resource intensive. Mobile technology may improve access to evidence-based interventions and may increase their efficacy. We evaluated the feasibility, acceptability and efficacy of an augmentative mobile ecological momentary intervention targeting self-management of mood symptoms. Methods This was a randomized single-blind controlled trial with 82 consumers diagnosed with bipolar disorder who completed a four-session psychoeducational intervention and were assigned to 10 weeks of either: 1) a mobile device delivered interactive intervention linking patient-reported mood states with personalized self-management strategies, or 2) paper-and-pencil mood monitoring. Participants were assessed at baseline, 6 weeks (mid-point), 12 weeks (post-treatment), and 24 weeks (follow up) with clinician-rated depression and mania scales and self-reported functioning. Results Retention at 12 weeks was 93% and both conditions were associated with high satisfaction. Compared to the paper-and-pencil condition, participants in the augmented mobile intervention condition showed significantly greater reductions in depressive symptoms at 6 and 12 weeks (Cohen's d = 0.48 at both time points). However, these effects were not maintained at 24-week follow up. Conditions did not differ significantly in the impact on manic symptoms or functional impairment. Limitations This was not a definitive trial and was not powered to detect moderators and mediators. Conclusions Automated mobile-phone intervention is feasible, acceptable, and may enhance the impact of brief psychoeducation on depressive symptoms in bipolar disorder. However, sustainment of gains from symptom self-management mobile interventions, once stopped, may be limited. PMID:25479050
Hardy, Joseph L.; Nelson, Rolf A.; Thomason, Moriah E.; Sternberg, Daniel A.; Katovich, Kiefer; Farzin, Faraz; Scanlon, Michael
2015-01-01
Background A variety of studies have demonstrated gains in cognitive ability following cognitive training interventions. However, other studies have not shown such gains, and questions remain regarding the efficacy of specific cognitive training interventions. Cognitive training research often involves programs made up of just one or a few exercises, targeting limited and specific cognitive endpoints. In addition, cognitive training studies typically involve small samples that may be insufficient for reliable measurement of change. Other studies have utilized training periods that were too short to generate reliable gains in cognitive performance. Methods The present study evaluated an online cognitive training program comprised of 49 exercises targeting a variety of cognitive capacities. The cognitive training program was compared to an active control condition in which participants completed crossword puzzles. All participants were recruited, trained, and tested online (N = 4,715 fully evaluable participants). Participants in both groups were instructed to complete one approximately 15-minute session at least 5 days per week for 10 weeks. Results Participants randomly assigned to the treatment group improved significantly more on the primary outcome measure, an aggregate measure of neuropsychological performance, than did the active control group (Cohen’s d effect size = 0.255; 95% confidence interval = [0.198, 0.312]). Treatment participants showed greater improvements than controls on speed of processing, short-term memory, working memory, problem solving, and fluid reasoning assessments. Participants in the treatment group also showed greater improvements on self-reported measures of cognitive functioning, particularly on those items related to concentration compared to the control group (Cohen’s d = 0.249; 95% confidence interval = [0.191, 0.306]). Conclusion Taken together, these results indicate that a varied training program composed of a number of
Individual Differences Methods for Randomized Experiments
ERIC Educational Resources Information Center
Tucker-Drob, Elliot M.
2011-01-01
Experiments allow researchers to randomly vary the key manipulation, the instruments of measurement, and the sequences of the measurements and manipulations across participants. To date, however, the advantages of randomized experiments to manipulate both the aspects of interest and the aspects that threaten internal validity have been primarily…
Brownian Optimal Stopping and Random Walks
Lamberton, D.
2002-06-05
One way to compute the value function of an optimal stopping problem along Brownian paths consists of approximating Brownian motion by a random walk. We derive error estimates for this type of approximation under various assumptions on the distribution of the approximating random walk.
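The approximation can be sketched concretely (a minimal illustration with arbitrary parameters, not the paper's construction): replace Brownian motion by a symmetric random walk with step h = sqrt(Δt) so that the variances match, then compute the value function by backward induction over the walk's lattice, taking at each node the larger of stopping now and the expected value of continuing.

```python
import math

def optimal_stopping_value(payoff, x0, horizon, n_steps):
    """Approximate sup over stopping times tau <= horizon of E[payoff(B_tau)],
    with B_0 = x0, by backward induction on a symmetric random walk lattice."""
    dt = horizon / n_steps
    h = math.sqrt(dt)  # walk step chosen so the variance matches Brownian motion
    # terminal layer: stopping is forced, so the value equals the payoff
    values = [payoff(x0 + (2 * j - n_steps) * h) for j in range(n_steps + 1)]
    for n in range(n_steps - 1, -1, -1):
        # at each node: max(stop now, expected value of continuing)
        values = [max(payoff(x0 + (2 * j - n) * h),
                      0.5 * (values[j] + values[j + 1]))
                  for j in range(n + 1)]
    return values[0]
```

Because the walk is a martingale, a linear payoff gives a value of exactly payoff(x0), which is a handy sanity check; error estimates of the kind derived in the paper bound the gap between this lattice value and the true Brownian value as n_steps grows.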
The Design of Cluster Randomized Crossover Trials
ERIC Educational Resources Information Center
Rietbergen, Charlotte; Moerbeek, Mirjam
2011-01-01
The inefficiency induced by between-cluster variation in cluster randomized (CR) trials can be reduced by implementing a crossover (CO) design. In a simple CO trial, each subject receives each treatment in random order. A powerful characteristic of this design is that each subject serves as its own control. In a CR CO trial, clusters of subjects…
Color Charts, Esthetics, and Subjective Randomness
ERIC Educational Resources Information Center
Sanderson, Yasmine B.
2012-01-01
Color charts, or grids of evenly spaced multicolored dots or squares, appear in the work of modern artists and designers. Often the artist/designer distributes the many colors in a way that could be described as "random," that is, without an obvious pattern. We conduct a statistical analysis of 125 "random-looking" art and design color charts and…
Random numbers certified by Bell's theorem.
Pironio, S; Acín, A; Massar, S; de la Giroday, A Boyer; Matsukevich, D N; Maunz, P; Olmschenk, S; Hayes, D; Luo, L; Manning, T A; Monroe, C
2010-04-15
Randomness is a fundamental feature of nature and a valuable resource for applications ranging from cryptography and gambling to numerical simulation of physical and biological systems. Random numbers, however, are difficult to characterize mathematically, and their generation must rely on an unpredictable physical process. Inaccuracies in the theoretical modelling of such processes or failures of the devices, possibly due to adversarial attacks, limit the reliability of random number generators in ways that are difficult to control and detect. Here, inspired by earlier work on non-locality-based and device-independent quantum information processing, we show that the non-local correlations of entangled quantum particles can be used to certify the presence of genuine randomness. It is thereby possible to design a cryptographically secure random number generator that does not require any assumption about the internal working of the device. Such a strong form of randomness generation is impossible classically and possible in quantum systems only if certified by a Bell inequality violation. We carry out a proof-of-concept demonstration of this proposal in a system of two entangled atoms separated by approximately one metre. The observed Bell inequality violation, featuring near perfect detection efficiency, guarantees that 42 new random numbers are generated with 99 per cent confidence. Our results lay the groundwork for future device-independent quantum information experiments and for addressing fundamental issues raised by the intrinsic randomness of quantum theory. PMID:20393558
Random ambience using high fidelity images
NASA Astrophysics Data System (ADS)
Abu, Nur Azman; Sahib, Shahrin
2011-06-01
Most secure communication nowadays mandates true random keys as an input. These operations are mostly designed and handled by the developers of the cryptosystem. Given the nature of confidential crypto development today, pseudorandom keys are typically designed and still preferred by the developers of the cryptosystem. However, pseudorandom keys are predictable, periodic, and repeatable, and hence carry minimal entropy. True random keys are believed to be generated only via hardware random number generators. Careful statistical analysis is still required to have any confidence that the process and apparatus generate numbers sufficiently random to suit cryptographic use. In the underlying research, each moment in life is considered unique in itself. A random key is unique to the moment at which the user generates it, whenever random keys are needed in practical secure communication. The ambience of a high-fidelity digital image is tested for randomness according to the NIST Statistical Test Suite. A recommendation for generating simple 4-megabit-per-second live random cryptographic keys is reported.
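For illustration, the first and simplest test in the NIST suite, the frequency (monobit) test, can be sketched as follows (a simplified version of NIST SP 800-22, not the authors' implementation):

```python
import math

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test: small p-values indicate
    the sequence is biased toward 0s or 1s."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)     # map bits to +/-1 and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))    # reject randomness if p < 0.01
```

A sequence extracted from an image ambience would be run through this and the remaining SP 800-22 tests; passing the monobit test alone says nothing about higher-order structure.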
A random Q-switched fiber laser.
Tang, Yulong; Xu, Jianqiu
2015-01-01
Extensive studies have been performed on random lasers in which multiple-scattering feedback is used to generate coherent emission. Q-switching and mode-locking are well-known routes for achieving high peak power output in conventional lasers. However, in random lasers, the ubiquitous random cavities that are formed by multiple scattering inhibit energy storage, making Q-switching impossible. In this paper, widespread Rayleigh scattering arising from the intrinsic micro-scale refractive-index irregularities of fiber cores is used to form random cavities along the fiber. The Q-factor of the cavity is rapidly increased by stimulated Brillouin scattering just after the spontaneous emission is enhanced by random cavity resonances, resulting in random Q-switched pulses with high brightness and high peak power. This is the first reported observation of high-brightness random Q-switched laser emission, and it is expected to stimulate new areas of scientific research and applications, including encryption, remote three-dimensional random imaging and the simulation of stellar lasing. PMID:25797520
Evaluation of the Randomized Multiple Choice Format.
ERIC Educational Resources Information Center
Harke, Douglas James
Each physics problem used in evaluating the effectiveness of Randomized Multiple Choice (RMC) tests was stated in the conventional manner and was followed by several multiple choice items corresponding to the steps in a written solution but presented in random order. Students were instructed to prepare a written answer and to use it to answer the…
Effect Sizes in Cluster-Randomized Designs
ERIC Educational Resources Information Center
Hedges, Larry V.
2007-01-01
Multisite research designs involving cluster randomization are becoming increasingly important in educational and behavioral research. Researchers would like to compute effect size indexes based on the standardized mean difference to compare the results of cluster-randomized studies (and corresponding quasi-experiments) with other studies and to…
Synchronization Properties of Random Piecewise Isometries
NASA Astrophysics Data System (ADS)
Gorodetski, Anton; Kleptsyn, Victor
2016-08-01
We study the synchronization properties of the random double rotations on tori. We give a criterion that shows when synchronization is present in the case of random double rotations on the circle and prove that it is always absent in dimensions two and higher.
Truly random bit generation based on a novel random Brillouin fiber laser.
Xiang, Dao; Lu, Ping; Xu, Yanping; Gao, Song; Chen, Liang; Bao, Xiaoyi
2015-11-15
We propose a novel dual-emission random Brillouin fiber laser (RBFL) with bidirectional pumping operation. Numerical simulations and experimental verification of the chaotic temporal and statistical properties of the RBFL are conducted, revealing intrinsic unpredictable intensity fluctuations and two completely uncorrelated laser outputs. A random bit generator based on quantum noise sources in the random Fabry-Perot resonator of the RBFL is realized at a bit rate of 5 Mbps with verified randomness. PMID:26565888
2014-01-01
(primary outcome) and physical exertion during work, social capital and work ability (secondary outcomes) are assessed at baseline and 10-week follow-up. Further, postural balance and mechanical muscle function are assessed during clinical examination at baseline and follow-up. Discussion This cluster randomized trial will investigate the change in self-rated average pain intensity in the back, neck and shoulder after either 10 weeks of physical exercise at the workplace or at home. Trial registration ClinicalTrials.gov (NCT01921764). PMID:24708570
Enhancing superconducting critical current by randomness
NASA Astrophysics Data System (ADS)
Wang, Y. L.; Thoutam, L. R.; Xiao, Z. L.; Shen, B.; Pearson, J. E.; Divan, R.; Ocola, L. E.; Crabtree, G. W.; Kwok, W. K.
2016-01-01
The key ingredient of high critical currents in a type-II superconductor is defect sites that pin vortices. Contrary to earlier understanding on nanopatterned artificial pinning, here we show unequivocally the advantages of a random pinscape over an ordered array in a wide magnetic field range. We reveal that the better performance of a random pinscape is due to the variation of its local density of pinning sites (LDOPS), which mitigates the motion of vortices. This is confirmed by achieving even higher enhancement of the critical current through a conformally mapped random pinscape, where the distribution of the LDOPS is further enlarged. The demonstrated key role of LDOPS in enhancing superconducting critical currents gets at the heart of random versus commensurate pinning. Our findings highlight the importance of random pinscapes in enhancing the superconducting critical currents of applied superconductors.
Self-testing quantum random number generator.
Lunghi, Tommaso; Brask, Jonatan Bohr; Lim, Charles Ci Wen; Lavigne, Quentin; Bowles, Joseph; Martin, Anthony; Zbinden, Hugo; Brunner, Nicolas
2015-04-17
The generation of random numbers is a task of paramount importance in modern science. A central problem for both classical and quantum randomness generation is to estimate the entropy of the data generated by a given device. Here we present a protocol for self-testing quantum random number generation, in which the user can monitor the entropy in real time. Based on a few general assumptions, our protocol guarantees continuous generation of high quality randomness, without the need for a detailed characterization of the devices. Using a fully optical setup, we implement our protocol and illustrate its self-testing capacity. Our work thus provides a practical approach to quantum randomness generation in a scenario of trusted but error-prone devices. PMID:25933297
Generation of Random Numbers by Micromechanism
NASA Astrophysics Data System (ADS)
Mita, Makoto; Toshiyoshi, Hiroshi; Ataka, Manabu; Fujita, Hiroyuki
We have successfully developed a novel micromechanical random number generator (RNG) by using the silicon micromachining technique. The MEM (Micro Electro Mechanical) RNG produces a series of random numbers by using the pull-in instability of electrostatic actuation, operated at a typical dc voltage of 150 V. The MEM RNG is made by the deep reactive ion etching of a silicon-on-insulator (SOI) wafer, and is very small compared with conventional RNG hardware based on the randomness of thermal noise or isotope radiation. Quality of randomness has been experimentally confirmed by the self-correlation study of the generated series of numbers. The MEM RNG proposed here would be a true random number generator, which is needed for the highly secured encryption systems of today's information technology.
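The self-correlation check mentioned above can be illustrated as a lag-k autocorrelation of the ±1-mapped bit stream (an illustrative version; the authors' exact statistic is not specified in the abstract):

```python
def self_correlation(bits, lag):
    """Lag-k autocorrelation of the +/-1 sequence derived from a bit stream:
    near 0 for good randomness, +/-1 for perfectly (anti-)correlated output."""
    xs = [1 if b else -1 for b in bits]
    n = len(xs) - lag
    return sum(xs[i] * xs[i + lag] for i in range(n)) / n
```

For a generator like the MEM RNG, one would compute this over a range of lags and check that all values stay within the statistical band expected for independent bits (roughly ±2/sqrt(n)).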
Randomization in clinical trials: conclusions and recommendations.
Lachin, J M; Matts, J P; Wei, L J
1988-12-01
The statistical properties of simple (complete) randomization, permuted-block (or simply blocked) randomization, and the urn adaptive biased-coin randomization are summarized. These procedures are contrasted to covariate adaptive procedures such as minimization and to response adaptive procedures such as the play-the-winner rule. General recommendations are offered regarding the use of complete, permuted-block, or urn randomization. In a large double-masked trial, any of these procedures may be acceptable. For a given trial, the relative merits of each procedure should be carefully weighed in relation to the characteristics of the trial. Important considerations are the size of the trial, overall as well as within the smallest subgroup to be employed in a subgroup-specific analysis, whether or not the trial is to be masked, and the resources needed to perform the proper randomization-based permutational analysis. PMID:3203526
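As a concrete illustration of one of the summarized procedures, permuted-block randomization can be sketched as follows (block size, arm labels, and the function interface are illustrative choices, not from the paper):

```python
import random

def permuted_block_schedule(n, block_size=4, arms=("A", "B"), seed=None):
    """Assign n subjects to treatment arms using permuted blocks: within
    every complete block each arm appears equally often, in random order,
    which keeps group sizes balanced throughout accrual."""
    if block_size % len(arms) != 0:
        raise ValueError("block size must be a multiple of the number of arms")
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)        # random order within the block
        schedule.extend(block)
    return schedule[:n]
```

The guaranteed within-block balance is exactly what distinguishes this scheme from simple (complete) randomization, at the cost of partial predictability near the end of each block in an unmasked trial, one of the trade-offs the review weighs.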
Bright emission from a random Raman laser
Hokr, Brett H.; Bixler, Joel N.; Cone, Michael T.; Mason, John D.; Beier, Hope T.; Noojin, Gary D.; Petrov, Georgi I.; Golovan, Leonid A.; Thomas, Robert J.; Rockwell, Benjamin A.; Yakovlev, Vladislav V.
2014-01-01
Random lasers are a developing class of light sources that utilize a highly disordered gain medium as opposed to a conventional optical cavity. Although traditional random lasers often have a relatively broad emission spectrum, a random laser that utilizes vibrational transitions via Raman scattering allows for an extremely narrow bandwidth, on the order of 10 cm⁻¹. Here we demonstrate the first experimental evidence of lasing via a Raman interaction in a bulk three-dimensional random medium, with conversion efficiencies on the order of a few percent. Furthermore, Monte Carlo simulations are used to study the complex spatial and temporal dynamics of nonlinear processes in turbid media. In addition to providing a large signal, characteristic of the Raman medium, the random Raman laser offers us an entirely new tool for studying the dynamics of gain in a turbid medium. PMID:25014073
Gold nanostars for random lasing enhancement.
Ziegler, Johannes; Djiango, Martin; Vidal, Cynthia; Hrelescu, Calin; Klar, Thomas A
2015-06-15
We demonstrate random lasing with star-shaped gold nanoparticles ("nanostars") as scattering centers embedded in a dye-doped gain medium. It is experimentally shown that star-shaped gold nanoparticles outperform those of conventional shapes, such as spherical or prolate nanoparticles. The nanoparticles are randomly distributed within a thin film of gain medium, forming resonators which support coherent laser modes. Driven by single-pulsed excitation, the random lasers exhibit coherent lasing thresholds in the order of 0.9 mJ/cm(2) and spectrally narrow emission peaks with linewidths less than 0.2 nm. The distinguished random laser comprising nanostars is likely to take advantage of the high plasmonic field enhancements, localized at the spiky tips of the nanostars, which improves the feedback mechanism for lasing and increases the emission intensity of the random laser. PMID:26193498
Self-Testing Quantum Random Number Generator
NASA Astrophysics Data System (ADS)
Lunghi, Tommaso; Brask, Jonatan Bohr; Lim, Charles Ci Wen; Lavigne, Quentin; Bowles, Joseph; Martin, Anthony; Zbinden, Hugo; Brunner, Nicolas
2015-04-01
The generation of random numbers is a task of paramount importance in modern science. A central problem for both classical and quantum randomness generation is to estimate the entropy of the data generated by a given device. Here we present a protocol for self-testing quantum random number generation, in which the user can monitor the entropy in real time. Based on a few general assumptions, our protocol guarantees continuous generation of high quality randomness, without the need for a detailed characterization of the devices. Using a fully optical setup, we implement our protocol and illustrate its self-testing capacity. Our work thus provides a practical approach to quantum randomness generation in a scenario of trusted but error-prone devices.
ERIC Educational Resources Information Center
Marcus, Sue M.; Stuart, Elizabeth A.; Wang, Pei; Shadish, William R.; Steiner, Peter M.
2012-01-01
Although randomized studies have high internal validity, generalizability of the estimated causal effect from randomized clinical trials to real-world clinical or educational practice may be limited. We consider the implication of randomized assignment to treatment, as compared with choice of preferred treatment as it occurs in real-world…
Gajecki, Mikael; Johansson, Magnus; Blankers, Matthijs; Sinadinovic, Kristina; Stenlund-Gens, Erik; Berman, Anne H.
2016-01-01
Background The Internet has increasingly been studied as mode of delivery for interventions targeting problematic alcohol use. Most interventions have been fully automated, but some research suggests that adding counselor guidance may improve alcohol consumption outcomes. Methods An eight-module Internet-based self-help program based on cognitive behavioral therapy (CBT) was tested among Internet help-seekers. Eighty participants with problematic alcohol use according to the Alcohol Use Disorders Identification Test (AUDIT; scores of ≥ 6 for women and ≥ 8 for men) were recruited online from an open access website and randomized into three different groups. All groups were offered the same self-help program, but participants in two of the three groups received Internet-based counselor guidance in addition to the self-help program. One of the guidance groups was given a choice between guidance via asynchronous text messages or synchronous text-based chat, while the other guidance group received counselor guidance via asynchronous text messages only. Results In the choice group, 65% (13 of 20 participants) chose guidance via asynchronous text messages. At the 10-week post-treatment follow-up, an intention-to-treat (ITT) analysis showed that participants in the two guidance groups (choice and messages) reported significantly lower past week alcohol consumption compared to the group without guidance; 10.8 (SD = 12.1) versus 22.6 (SD = 18.4); p = 0.001; Cohen’s d = 0.77. Participants in both guidance groups reported significantly lower scores on the AUDIT at follow-up compared to the group without guidance, with a mean score of 14.4 (SD = 5.2) versus 18.2 (SD = 5.9); p = 0.003; Cohen’s d = 0.68. A higher proportion of participants in the guidance groups said that they would recommend the program compared to the group without guidance (81% for choice; 93% for messages versus 47% for self-help). Conclusion Self-help programs for problematic alcohol use can be more
Jay, Kenneth; schraefel, mc; Andersen, Christoffer H; Ebbesen, Frederik S; Christiansen, David H; Skotte, Jørgen; Zebis, Mette K; Andersen, Lars L
2013-01-01
Objective: To determine the effect of small daily amounts of progressive resistance training on rapid force development of painful neck/shoulder muscles. Methods: 198 generally healthy adults with frequent neck/shoulder muscle pain (mean: age 43·1 years, computer use 93% of work time, 88% women, duration of pain 186 days during the previous year) were randomly allocated to 2 or 12 min of daily progressive resistance training with elastic tubing or to a control group receiving weekly information on general health. A blinded assessor took measures at baseline and at 10-week follow-up; participants performed maximal voluntary contractions at a static 90-degree shoulder joint angle. Rapid force development was determined as the rate of torque development and maximal muscle strength was determined as the peak torque. Results: Compared with the control group, rate of torque development increased 31·0 Nm s⁻¹ [95% confidence interval: (1·33–11·80)] in the 2-min group and 33·2 Nm s⁻¹ (1·66–12·33) in the 12-min group from baseline to 10-week follow-up, corresponding to an increase of 16·0% and 18·2% for the two groups, respectively. The increase was significantly different compared to controls (P<0·05) for both training groups. Maximal muscle strength increased only ∼5–6% [mean and 95% confidence interval for 2- and 12-min groups to control, respectively: 2·5 Nm (0·05–0·73) and 2·2 Nm (0·01–0·70)]. No significant differences between the 2- and 12-min groups were evident. A weak but significant relationship existed between changes in rapid force development and pain (r = 0·27, P<0·01), but not between changes in maximal muscle strength and pain. Conclusion: Small daily amounts of progressive resistance training in adults with frequent neck/shoulder pain increases rapid force development and, to a lesser extent, maximal force capacity. PMID:23758661
Garland, Eric L; Roberts-Lewis, Amelia; Tronnier, Christine D; Graves, Rebecca; Kelley, Karen
2016-02-01
In many clinical settings, there is a high comorbidity between substance use disorders, psychiatric disorders, and traumatic stress. Novel therapies are needed to address these co-occurring issues efficiently. The aim of the present study was to conduct a pragmatic randomized controlled trial comparing Mindfulness-Oriented Recovery Enhancement (MORE) to group Cognitive-Behavioral Therapy (CBT) and treatment-as-usual (TAU) for previously homeless men residing in a therapeutic community. Men with co-occurring substance use and psychiatric disorders, as well as extensive trauma histories, were randomly assigned to 10 weeks of group treatment with MORE (n = 64), CBT (n = 64), or TAU (n = 52). Study findings indicated that from pre-to post-treatment MORE was associated with modest yet significantly greater improvements in substance craving, post-traumatic stress, and negative affect than CBT, and greater improvements in post-traumatic stress and positive affect than TAU. A significant indirect effect of MORE on decreasing craving and post-traumatic stress by increasing dispositional mindfulness was observed, suggesting that MORE may target these issues via enhancing mindful awareness in everyday life. This pragmatic trial represents the first head-to-head comparison of MORE against an empirically-supported treatment for co-occurring disorders. Results suggest that MORE, as an integrative therapy designed to bolster self-regulatory capacity, may hold promise as a treatment for intersecting clinical conditions. PMID:26701171
Duality analysis on random planar lattices.
Ohzeki, Masayuki; Fujii, Keisuke
2012-11-01
The conventional duality analysis is employed to identify a location of a critical point on a uniform lattice without any disorder in its structure. In the present study, we deal with the random planar lattice, which consists of the randomized structure based on the square lattice. We introduce the uniformly random modification by the bond dilution and contraction on a part of the unit square. The random planar lattice includes the triangular and hexagonal lattices in extreme cases of a parameter to control the structure. A modern duality analysis fashion with real-space renormalization is found to be available for estimating the location of the critical points with a wide range of the randomness parameter. As a simple test bed, we demonstrate that our method indeed gives several critical points for the cases of the Ising and Potts models and the bond-percolation thresholds on the random planar lattice. Our method leads to not only such an extension of the duality analyses on the classical statistical mechanics but also a fascinating result associated with optimal error thresholds for a class of quantum error correction code, the surface code on the random planar lattice, which is known as a skillful technique to protect the quantum state. PMID:23214752
Reduction of display artifacts by random sampling
NASA Technical Reports Server (NTRS)
Ahumada, A. J., Jr.; Nagel, D. C.; Watson, A. B.; Yellott, J. I., Jr.
1983-01-01
The application of random-sampling techniques to remove visible artifacts (such as flicker, moire patterns, and paradoxical motion) introduced in TV-type displays by discrete sequential scanning is discussed and demonstrated. Sequential-scanning artifacts are described; the window of visibility defined in spatiotemporal frequency space by Watson and Ahumada (1982 and 1983) and Watson et al. (1983) is explained; the basic principles of random sampling are reviewed and illustrated by the case of the human retina; and it is proposed that the sampling artifacts can be replaced by random noise, which can then be shifted to frequency-space regions outside the window of visibility. Vertical sequential, single-random-sequence, and continuously renewed random-sequence plotting displays generating 128 points at update rates up to 130 Hz are applied to images of stationary and moving lines, and best results are obtained with the single random sequence for the stationary lines and with the renewed random sequence for the moving lines.
Duality analysis on random planar lattices
NASA Astrophysics Data System (ADS)
Ohzeki, Masayuki; Fujii, Keisuke
2012-11-01
The conventional duality analysis is employed to identify a location of a critical point on a uniform lattice without any disorder in its structure. In the present study, we deal with the random planar lattice, which consists of the randomized structure based on the square lattice. We introduce the uniformly random modification by the bond dilution and contraction on a part of the unit square. The random planar lattice includes the triangular and hexagonal lattices in extreme cases of a parameter to control the structure. A modern duality analysis fashion with real-space renormalization is found to be available for estimating the location of the critical points with a wide range of the randomness parameter. As a simple test bed, we demonstrate that our method indeed gives several critical points for the cases of the Ising and Potts models and the bond-percolation thresholds on the random planar lattice. Our method leads to not only such an extension of the duality analyses on the classical statistical mechanics but also a fascinating result associated with optimal error thresholds for a class of quantum error correction code, the surface code on the random planar lattice, which is known as a skillful technique to protect the quantum state.
Wave propagation in random granular chains.
Manjunath, Mohith; Awasthi, Amnaya P; Geubelle, Philippe H
2012-03-01
The influence of randomness on wave propagation in one-dimensional chains of spherical granular media is investigated. The interaction between the elastic spheres is modeled using the classical Hertzian contact law. Randomness is introduced in the discrete model using random distributions of particle mass, Young's modulus, or radius. Of particular interest in this study is the quantification of the attenuation in the amplitude of the impulse associated with various levels of randomness: two distinct regimes of decay are observed, characterized by an exponential or a power law, respectively. The responses are normalized to represent a vast array of material parameters and impact conditions. The virial theorem is applied to investigate the transfer from potential to kinetic energy components in the system for different levels of randomness. The level of attenuation in the two decay regimes is compared for the three different sources of randomness and it is found that randomness in radius leads to the maximum rate of decay in the exponential regime of wave propagation. PMID:22587093
Semi-device-independent randomness expansion with partially free random sources
NASA Astrophysics Data System (ADS)
Zhou, Yu-Qian; Li, Hong-Wei; Wang, Yu-Kun; Li, Dan-Dan; Gao, Fei; Wen, Qiao-Yan
2015-08-01
By proposing device-independent protocols, Pironio et al. [Nature (London) 464, 1021 (2010), 10.1038/nature09008] and Colbeck et al. [Nat. Phys. 8, 450 (2012), 10.1038/nphys2300] proved that new randomness can be generated by using perfectly free random sources or partially free ones as seed. Subsequently, Li et al. [Phys. Rev. A 84, 034301 (2011), 10.1103/PhysRevA.84.034301] studied this topic in the framework of semi-device-independent and proved that new randomness can be obtained from perfectly free random sources. Here we discuss whether and how partially free random sources bring us new randomness in a semi-device-independent scenario. We propose a semi-device-independent randomness expansion protocol with partially free random sources and obtain the condition that the partially free random sources should satisfy to generate new randomness. In the process of analysis, we acquire a two-dimensional quantum witness. Furthermore, we get the analytic relationship between the generated randomness and the two-dimensional quantum witness violation.
Low-noise Brillouin random fiber laser with a random grating-based resonator.
Xu, Yanping; Gao, Song; Lu, Ping; Mihailov, Stephen; Chen, Liang; Bao, Xiaoyi
2016-07-15
A novel Brillouin random fiber laser (BRFL) with the random grating-based Fabry-Perot (FP) resonator is proposed and demonstrated. Significantly enhanced random feedback from the femtosecond laser-fabricated random grating overwhelms the Rayleigh backscattering, which leads to efficient Brillouin gain for the lasing modes and reduced lasing threshold. Compared to the intensity and frequency noises of the Rayleigh feedback resonator, those of the proposed random laser are effectively suppressed due to the reduced resonating modes and mode competition resulting from the random grating-formed filters. Using the heterodyne technique, the linewidth of the coherent random lasing spike is measured to be ∼45.8 Hz. PMID:27420494
NASA Astrophysics Data System (ADS)
Avena, L.; Thomann, P.
2012-07-01
We perform simulations for one-dimensional continuous-time random walks in two dynamic random environments with fast (independent spin-flips) and slow (simple symmetric exclusion) decay of space-time correlations, respectively. We focus on the asymptotic speeds and the scaling limits of such random walks. We observe different behaviors depending on the dynamics of the underlying random environment and the ratio between the jump rate of the random walk and that of the environment. We compare our data with well-known results for static random environments. We observe that the non-diffusive regime known so far only for the static case can occur in the dynamical setup too. Such anomalous fluctuations give rise to a new phase diagram. Further, we discuss possible consequences for more general static and dynamic random environments.
Fatigue strength reduction model: RANDOM3 and RANDOM4 user manual, appendix 2
NASA Technical Reports Server (NTRS)
Boyce, Lola; Lovelace, Thomas B.
1989-01-01
The FORTRAN programs RANDOM3 and RANDOM4 are documented. They are based on fatigue strength reduction, using a probabilistic constitutive model, and predict the random lifetime of an engine component to reach a given fatigue strength. Included in this user manual are details regarding the theoretical backgrounds of RANDOM3 and RANDOM4. Appendix A gives information on the physical quantities, their symbols, FORTRAN names, and both SI and U.S. Customary units. Appendices B and C include photocopies of the actual computer printout corresponding to the sample problems. Appendices D and E detail the IMSL, Version 10(1), subroutines and functions called by RANDOM3 and RANDOM4 and the SAS/GRAPH(2) programs that can be used to plot both the probability density functions (p.d.f.) and the cumulative distribution functions (c.d.f.).
Scale-invariant geometric random graphs
NASA Astrophysics Data System (ADS)
Xie, Zheng; Rogers, Tim
2016-03-01
We introduce and analyze a class of growing geometric random graphs that are invariant under rescaling of space and time. Directed connections between nodes are drawn according to influence zones that depend on node position in space and time, mimicking the heterogeneity and increased specialization found in growing networks. Through calculations and numerical simulations we explore the consequences of scale invariance for geometric random graphs generated this way. Our analysis reveals a dichotomy between scale-free and Poisson distributions of in- and out-degree, the existence of a random number of hub nodes, high clustering, and unusual percolation behavior. These properties are similar to those of empirically observed web graphs.
Physical tests for random numbers in simulations
Vattulainen, I.; Ala-Nissila, T.; Kankaala, K. (University of Helsinki; Tampere University of Technology; Center for Scientific Computing, Espoo)
1994-11-07
We propose three physical tests to measure correlations in random numbers used in Monte Carlo simulations. The first test uses autocorrelation times of certain physical quantities when the Ising model is simulated with the Wolff algorithm. The second test is based on random walks, and the third on blocks of n successive numbers. We apply the tests to show that recent errors in high-precision Ising simulations using generalized feedback shift register algorithms are due to short-range correlations in random number sequences.
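The random-walk test mentioned above can be illustrated with a minimal Python sketch (a simplified stand-in for the authors' procedure, not their code): for uncorrelated numbers, the mean squared displacement of a ±1 walk equals the number of steps, and short-range correlations pull it away from that value.

```python
import random

def random_walk_test(rng, n_steps=1000, n_walks=2000):
    """Random-walk test for a generator rng() -> [0, 1): estimate the
    mean squared displacement of a +/-1 walk driven by the generator.
    Uncorrelated numbers give an MSD of ~n_steps."""
    total = 0.0
    for _ in range(n_walks):
        pos = sum(1 if rng() < 0.5 else -1 for _ in range(n_steps))
        total += pos * pos
    return total / n_walks

msd = random_walk_test(random.random)
# Consistent with uncorrelated numbers (expected value: 1000).
assert 0.8 * 1000 < msd < 1.2 * 1000
```

A generator with positively correlated outputs would produce an MSD systematically above n_steps; anticorrelated outputs push it below.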
Self-assembly of Random Copolymers
Li, Longyu; Raghupathi, Kishore; Song, Cunfeng; Prasad, Priyaa; Thayumanavan, S.
2014-01-01
Self-assembly of random copolymers has attracted considerable attention recently. In this feature article, we highlight the use of random copolymers to prepare nanostructures with different morphologies and to prepare nanomaterials that are responsive to single or multiple stimuli. The synthesis of single-chain nanoparticles and their potential applications from random copolymers are also discussed in some detail. We aim to draw more attention to these easily accessible copolymers, which are likely to play an important role in translational polymer research. PMID:25036552
Spatio-temporal optical random number generator.
Stipčević, M; Bowers, J E
2015-05-01
We present the first random number generator (RNG) to simultaneously use independent spatial and temporal quantum randomness contained in an optical system. The availability of two independent sources of entropy makes the RNG resilient to hardware failure and signal-injection attacks. We show that the deviation from randomness of the generated numbers can be estimated quickly from simple measurements, eliminating the need for the usual time-consuming statistical testing of the output data. As confirmation, we demonstrate that the generated numbers pass the NIST Statistical Test Suite. PMID:25969254
Coherent random lasing in diffusive resonant media
Uppu, Ravitej; Tiwari, Anjani Kumar; Mujumdar, Sushil
2011-10-03
We investigate diffusive propagation of light and consequent random lasing in a medium comprising resonant spherical scatterers. A Monte-Carlo calculation based on photon propagation via three-dimensional random walks is employed to obtain the dwell-times of light in the system. We compare the inter-scatterer and intra-scatterer dwell-times for representative resonant and non-resonant wavelengths. Our results show that more efficient random lasing, with intense coherent modes, is obtained when the gain is present inside the scatterers. Further, a larger reduction in frequency fluctuations is achieved by the system with intra-scatterer gain.
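A bare-bones version of such a Monte Carlo dwell-time calculation can be sketched in Python. This sketch assumes a homogeneous diffusive medium with exponentially distributed free paths; the resonant intra-scatterer physics central to the paper is omitted:

```python
import math
import random

def photon_dwell_time(mean_free_path=1.0, radius=10.0, c=1.0):
    """Monte Carlo sketch of a 3-D photon random walk: exponentially
    distributed step lengths and isotropic scattering until the photon
    exits a sphere of the given radius. The dwell time is the total
    path length divided by the speed of light in the medium."""
    x = y = z = 0.0
    path = 0.0
    while x * x + y * y + z * z < radius * radius:
        step = random.expovariate(1.0 / mean_free_path)
        cos_t = random.uniform(-1.0, 1.0)          # isotropic direction
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = random.uniform(0.0, 2.0 * math.pi)
        x += step * sin_t * math.cos(phi)
        y += step * sin_t * math.sin(phi)
        z += step * cos_t
        path += step
    return path / c

# Diffusive transport: typical dwell times far exceed the ballistic
# crossing time radius / c.
times = [photon_dwell_time() for _ in range(200)]
assert sum(times) / len(times) > 10.0
```

Long dwell times are what allow gain to amplify light in such media; the paper's comparison of inter- and intra-scatterer dwell times refines this picture for resonant spheres.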
Plasmonic random lasing in polymer fiber.
Li, Songtao; Wang, Li; Zhai, Tianrui; Chen, Li; Wang, Meng; Wang, Yimeng; Tong, Fei; Wang, Yonglu; Zhang, Xinping
2016-06-13
A random fiber laser is achieved based on the plasmonic feedback mechanism, which is constructed by first siphoning the polymer solution doped with silver nanoparticles into a 300-μm capillary tube and then evaporating the solvent. Strong amplification of the radiation can be obtained by employing the variable gain region, the fiber waveguide scheme and three-dimensional plasmonic feedback provided by the silver nanoparticles. Low-threshold directional random lasing is observed in the polymer fiber. This simple and straightforward approach facilitates the investigation of plasmonic random fiber lasers. PMID:27410294
Cavity approach to the random solid state.
Mao, Xiaoming; Goldbart, Paul M; Mézard, Marc; Weigt, Martin
2005-09-30
The cavity approach is used to address the physical properties of random solids in equilibrium. Particular attention is paid to the fraction of localized particles and the distribution of localization lengths characterizing their thermal motion. This approach is of relevance to a wide class of random solids, including rubbery media (formed via the vulcanization of polymer fluids) and chemical gels (formed by the random covalent bonding of fluids of atoms or small molecules). The cavity approach confirms results that have been obtained previously via replica mean-field theory, doing so in a way that sheds new light on their physical origin. PMID:16241698
2013-01-01
Background Anxiety disorders affect approximately 10% to 20% of young people, can be enduring if left untreated, and have been associated with psychopathology in later life. Despite this, there is a paucity of empirical research to assist clinicians in determining appropriate treatment options. We describe a protocol for a randomized controlled trial in which we will examine the effectiveness of a group-based Acceptance and Commitment Therapy program for children and adolescents with a primary diagnosis of anxiety disorder. For the adolescent participants we will also evaluate the elements of the intervention that act as mechanisms for change. Methods/design We will recruit 150 young people (90 children and 60 adolescents) diagnosed with an anxiety disorder and their parent or caregiver. After completion of baseline assessment, participants will be randomized to one of three conditions (Acceptance and Commitment Therapy, Cognitive Behavior Therapy or waitlist control). Those in the Acceptance and Commitment Therapy and Cognitive Behavior Therapy groups will receive 10 × 1.5-hour weekly group-therapy sessions using a manualized treatment program, in accordance with the relevant therapy, to be delivered by psychologists. Controls will receive the Cognitive Behavior Therapy program after a 10-week waitlist period. Repeated measures will be taken immediately post-therapy and at three months after therapy cessation. Discussion To the best of our knowledge, this study will be the largest trial of Acceptance and Commitment Therapy in the treatment of children and young people to date. It will provide comprehensive data on the use of Acceptance and Commitment Therapy for anxiety disorders and will offer evidence for mechanisms involved in the process of change. Furthermore, additional data will be obtained for the use of Cognitive Behavior Therapy in this population and this research will illustrate the comparative effectiveness of these two interventions, which are currently
Epidemic spreading driven by biased random walks
NASA Astrophysics Data System (ADS)
Pu, Cunlai; Li, Siyuan; Yang, Jian
2015-08-01
Random walk is one of the basic mechanisms of many network-related applications. In this paper, we study the dynamics of epidemic spreading driven by biased random walks in complex networks. In our epidemic model, infected nodes send out infection packets by biased random walks to their neighbor nodes, and this causes the infection of susceptible nodes that receive the packets. Infected nodes recover from the infection at a constant rate λ, and will not be infected again after recovery. We obtain the largest instantaneous number of infected nodes and the largest number of ever-infected nodes respectively, by tuning the parameter α of the biased random walks. Simulation results on model and real-world networks show that spread of the epidemic becomes intense and widespread with increase of either delivery capacity of infected nodes, average node degree, or homogeneity of node degree distribution.
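The core move of the model, forwarding an infection packet by a biased random walk, can be sketched as follows. The degree-power bias k^α used here is a common choice for such walks and is an assumption of this illustration; the node degrees are hypothetical:

```python
import random

def biased_step(neighbors, degree, alpha):
    """One step of a biased walk: the next node is chosen with
    probability proportional to its degree raised to the power alpha
    (alpha = 0 recovers the unbiased random walk)."""
    weights = [degree[v] ** alpha for v in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]

# Toy neighborhood with hypothetical node degrees.
degree = {"hub": 8, "leaf1": 1, "leaf2": 2}
steps = [biased_step(list(degree), degree, alpha=1.0) for _ in range(1000)]

# Positive alpha biases packets toward high-degree nodes.
assert steps.count("hub") > steps.count("leaf1")
```

Tuning alpha negative instead steers packets toward low-degree nodes, which is how the walk parameter controls how intensely and widely the epidemic spreads in such models.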
Parametric models for samples of random functions
Grigoriu, M.
2015-09-15
A new class of parametric models, referred to as sample parametric models, is developed for random elements that match sample rather than the first two moments and/or other global properties of these elements. The models can be used to characterize, e.g., material properties at small scale in which case their samples represent microstructures of material specimens selected at random from a population. The samples of the proposed models are elements of finite-dimensional vector spaces spanned by samples, eigenfunctions of Karhunen–Loève (KL) representations, or modes of singular value decompositions (SVDs). The implementation of sample parametric models requires knowledge of the probability laws of target random elements. Numerical examples including stochastic processes and random fields are used to demonstrate the construction of sample parametric models, assess their accuracy, and illustrate how these models can be used to solve efficiently stochastic equations.
Computer Challenges: Random Walks in the Classroom.
ERIC Educational Resources Information Center
Gamble, Andy
1982-01-01
Discusses a short computer program used in teaching the random (RND) function in the BASIC programming language. Focuses on the mathematical concepts involved in the program related to elementary probability. (JN)
Products of Independent Elliptic Random Matrices
NASA Astrophysics Data System (ADS)
O'Rourke, Sean; Renfrew, David; Soshnikov, Alexander; Vu, Van
2015-07-01
For fixed m ≥ 1, we study the product of m independent N × N elliptic random matrices as N tends to infinity. Our main result shows that the empirical spectral distribution of the product converges, with probability 1, to the m-th power of the circular law, regardless of the joint distribution of the mirror entries in each matrix. This leads to a new kind of universality phenomenon: the limit law for the product of independent random matrices is independent of the limit laws for the individual matrices themselves. Our result also generalizes earlier results of Götze-Tikhomirov (On the asymptotic spectrum of products of independent random matrices, available at http://arxiv.org/abs/1012.2710) and O'Rourke-Soshnikov (Electron. J. Probab. 16(81):2219-2245, 2011) concerning the product of independent iid random matrices.
Stochastic structure formation in random media
NASA Astrophysics Data System (ADS)
Klyatskin, V. I.
2016-01-01
Stochastic structure formation in random media is considered using examples of elementary dynamical systems related to the two-dimensional geophysical fluid dynamics (Gaussian random fields) and to stochastically excited dynamical systems described by partial differential equations (lognormal random fields). In the latter case, spatial structures (clusters) may form with a probability of one in almost every system realization due to rare events happening with vanishing probability. Problems involving stochastic parametric excitation occur in fluid dynamics, magnetohydrodynamics, plasma physics, astrophysics, and radiophysics. A more complicated stochastic problem dealing with anomalous structures on the sea surface (rogue waves) is also considered, where the random Gaussian generation of sea surface roughness is accompanied by parametric excitation.
Random motion analysis of flexible satellite structures
NASA Technical Reports Server (NTRS)
Huang, T. C.; Das, A.
1978-01-01
A singular perturbation formulation is used to study the responses of a flexible satellite when random measurement errors can occur. The random variables, at different instants of time, are assumed to be uncorrelated. Procedures for obtaining maxima and minima are described, and a variation of the linear method is developed for the formal solution of the two-point boundary-value problems represented by the variational equations. Random and deterministic solutions for the structural position coordinates are studied, and an analytic algorithm for treating the force equation of motion is developed. Since the random system indicated by the variational equation will always be asymptotically unstable, any analysis of stability must be based on the deterministic system.
Random Variables: Simulations and Surprising Connections.
ERIC Educational Resources Information Center
Quinn, Robert J.; Tomlinson, Stephen
1999-01-01
Features activities for advanced second-year algebra students in grades 11 and 12. Introduces three random variables and considers an empirical and theoretical probability for each. Uses coins, regular dice, decahedral dice, and calculators. (ASK)
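The comparison of empirical and theoretical probability that these activities build on translates directly into a short simulation (Python here rather than the classroom dice and calculators):

```python
import random
from fractions import Fraction

# Random variable: the sum of two ordinary dice.
# Theoretical distribution: 6 - |s - 7| ways out of 36 for s = 2..12.
theoretical = {s: Fraction(6 - abs(s - 7), 36) for s in range(2, 13)}
assert sum(theoretical.values()) == 1

# Empirical distribution from simulation.
trials = 100_000
counts = {s: 0 for s in range(2, 13)}
for _ in range(trials):
    counts[random.randint(1, 6) + random.randint(1, 6)] += 1

# The empirical frequency of a 7 converges on the theoretical 6/36.
assert abs(counts[7] / trials - 1 / 6) < 0.01
```

Swapping `randint(1, 6)` for `randint(1, 10)` gives the decahedral-die version of the same exercise.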
A random search algorithm for laboratory computers
NASA Technical Reports Server (NTRS)
Curry, R. E.
1975-01-01
The small laboratory computer is ideal for experimental control and data acquisition. Postexperimental data processing is often performed on large computers because of the availability of sophisticated programs, but costs and data compatibility are negative factors. Parameter optimization can be accomplished on the small computer, offering ease of programming, data compatibility, and low cost. A previously proposed random-search algorithm ('random creep') was found to be very slow in convergence. A method is proposed (the 'random leap' algorithm) which starts in a global search mode and automatically adjusts step size to speed convergence. A FORTRAN executive program for the random-leap algorithm is presented which calls a user-supplied function subroutine. An example of a function subroutine is given which calculates maximum-likelihood estimates of receiver operating-characteristic parameters from binary response data. Other applications in parameter estimation, generalized least squares, and matrix inversion are discussed.
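A minimal sketch of such an adaptive random search follows, assuming Gaussian perturbations and simple multiplicative step-size control; it illustrates the start-global, end-local idea and is not the paper's FORTRAN executive program:

```python
import random

def random_leap(f, x0, step=1.0, iters=2000, grow=1.5, shrink=0.9):
    """Adaptive random search in the spirit of the 'random leap' idea:
    take Gaussian trial steps, enlarging the step size after an
    improvement and shrinking it after a failure, so the search starts
    in a global mode and automatically converges to a local one."""
    best_x, best_f = list(x0), f(x0)
    for _ in range(iters):
        trial = [x + step * random.gauss(0.0, 1.0) for x in best_x]
        f_trial = f(trial)
        if f_trial < best_f:
            best_x, best_f = trial, f_trial
            step *= grow                      # success: leap further
        else:
            step = max(step * shrink, 1e-12)  # failure: refine locally
    return best_x, best_f

# Minimize a simple quadratic as a stand-in for a likelihood surface.
x, fx = random_leap(lambda v: sum(t * t for t in v), [5.0, -3.0])
assert fx < 1e-3
```

A user-supplied function, here the quadratic lambda, plays the role of the paper's function subroutine; a maximum-likelihood objective would be plugged in the same way.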
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most 𝒪(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
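The paper's algorithm for general kernels is not spelled out in the abstract; the sketch below instead shows the standard geometric-skipping trick on a simple special case (a Chung-Lu/expected-degree graph), which conveys how a generator can beat the quadratic all-pairs loop.

```python
import math
import random

def chung_lu_fast(weights, seed=0):
    """Sample a sparse inhomogeneous random graph in roughly O(n + m)
    expected time via geometric skipping (the Batagelj-Brandes idea):
    edge {i, j} appears independently with probability
    min(w_i * w_j / S, 1), where S is the total weight.  This is a simple
    special case of a kernel graph, shown only to illustrate how jumping
    over runs of failed pairs avoids examining all n^2 of them."""
    rng = random.Random(seed)
    n = len(weights)
    order = sorted(range(n), key=lambda i: weights[i], reverse=True)
    S = float(sum(weights))
    edges = set()
    for ui in range(n - 1):
        u = order[ui]
        vi = ui + 1
        p = min(weights[u] * weights[order[vi]] / S, 1.0)
        while vi < n and p > 0:
            if p < 1.0:
                # skip a geometric number of certainly-absent edges
                vi += int(math.log(rng.random()) / math.log(1.0 - p))
            if vi < n:
                v = order[vi]
                q = min(weights[u] * weights[v] / S, 1.0)
                if rng.random() < q / p:
                    edges.add((min(u, v), max(u, v)))
                p = q
                vi += 1
    return edges

g = chung_lu_fast([2.0] * 100)  # expected degree 2 for every vertex
```

Sorting by weight guarantees q ≤ p along each row, which keeps the acceptance ratio q/p a valid probability.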
Nonlocal soliton scattering in random potentials
NASA Astrophysics Data System (ADS)
Piccardi, Armando; Residori, Stefania; Assanto, Gaetano
2016-07-01
We experimentally investigate the transport behaviour of nonlocal spatial optical solitons when launched in and interacting with propagation-invariant random potentials. The solitons are generated in nematic liquid crystals; the randomness is created by suitably engineered illumination of planar voltage-biased cells equipped with a photosensitive wall. We find that the fluctuations follow a super-diffusive trend, with the mean square displacement lowering for decreasing spatial correlation of the noise.
Numerical analysis of randomly forced glycolytic oscillations
Ryashko, Lev
2015-03-10
Randomly forced glycolytic oscillations in the Higgins model are studied both numerically and analytically. The numerical analysis is based on direct simulation of solutions of the stochastic system. Non-uniformity of the stochastic bundle along the deterministic cycle is shown. For the analytical investigation of the randomly forced Higgins model, the stochastic sensitivity function technique and the confidence-domains method are applied. Results on the influence of additive noise on the cycle of this model are given.
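Direct simulation of a randomly forced system of this kind is typically done with the Euler-Maruyama scheme; a sketch follows. The drift shown is the closely related Higgins-Sel'kov form of the glycolysis oscillator with illustrative parameter values, not equations or values taken from the paper.

```python
import math
import random

def euler_maruyama(drift, x0, t_end, dt, sigma, seed=0):
    """Direct simulation of dx = f(x) dt + sigma dW (additive noise)."""
    rng = random.Random(seed)
    x = list(x0)
    traj = [tuple(x)]
    sq = math.sqrt(dt)
    for _ in range(int(t_end / dt)):
        f = drift(x)
        x = [xi + fi * dt + s * sq * rng.gauss(0.0, 1.0)
             for xi, fi, s in zip(x, f, sigma)]
        traj.append(tuple(x))
    return traj

# Higgins-Sel'kov glycolysis drift; v0, k are illustrative values chosen
# so the deterministic fixed point is stable, not the paper's parameters.
v0, k = 1.2, 1.0
drift = lambda s: [v0 - s[0] * s[1] ** 2, s[0] * s[1] ** 2 - k * s[1]]
path = euler_maruyama(drift, [1.0, 1.0], t_end=50.0, dt=0.01,
                      sigma=[0.02, 0.02])
```

Running an ensemble of such trajectories and measuring their spread around the deterministic cycle is what reveals the non-uniformity of the stochastic bundle.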
Quasi-random array imaging collimator
Fenimore, E.E.
1980-08-20
A hexagonally shaped quasi-random no-two-holes-touching imaging collimator. The quasi-random array imaging collimator eliminates contamination from small angle off-axis rays by using a no-two-holes-touching pattern which simultaneously provides for a self-supporting array increasing throughput by elimination of a substrate. The present invention also provides maximum throughput using hexagonally shaped holes in a hexagonal lattice pattern for diffraction limited applications. Mosaicking is also disclosed for reducing fabrication effort.
Managing numerical errors in random sequential adsorption
NASA Astrophysics Data System (ADS)
Cieśla, Michał; Nowak, Aleksandra
2016-09-01
The aim of this study is to examine the influence of finite surface size and finite simulation time on the packing fraction estimated using random sequential adsorption simulations. Of particular interest is providing guidance on the simulation setup needed to achieve a desired level of accuracy. The analysis is based on properties of saturated random packings of disks on continuous, flat surfaces of different sizes.
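A minimal random sequential adsorption simulation makes the two error sources concrete: the finite number of attempts (finite time) and the finite periodic surface both bias the packing-fraction estimate. The setup below is an illustration, not the authors' code.

```python
import math
import random

def rsa_disks(radius, size, attempts, seed=0):
    """Random sequential adsorption of equal disks on a periodic square
    surface: propose uniform random centres, keep non-overlapping ones.
    Returns the packing fraction reached after `attempts` trials."""
    rng = random.Random(seed)
    placed = []
    d2 = (2.0 * radius) ** 2
    for _ in range(attempts):
        x, y = rng.uniform(0, size), rng.uniform(0, size)
        for px, py in placed:
            dx = abs(x - px); dx = min(dx, size - dx)  # periodic boundary
            dy = abs(y - py); dy = min(dy, size - dy)
            if dx * dx + dy * dy < d2:
                break
        else:
            placed.append((x, y))
    return len(placed) * math.pi * radius ** 2 / size ** 2

# Finite time leaves the estimate below the saturation value
# (about 0.547 for disks in the large-surface limit).
phi = rsa_disks(radius=1.0, size=20.0, attempts=20000)
```

Increasing `attempts` closes the gap to saturation only slowly (RSA kinetics follow a power law), which is exactly why setup guidance matters.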
Lighting up microscopy with random Raman lasing
NASA Astrophysics Data System (ADS)
Hokr, Brett H.; Nodurft, Dawson T.; Thompson, Jonathan V.; Bixler, Joel N.; Noojin, Gary D.; Redding, Brandon; Thomas, Robert J.; Cao, Hui; Rockwell, Benjamin A.; Scully, Marlan O.; Yakovlev, Vladislav V.
2016-03-01
Wide-field microscopy, where full images are obtained simultaneously, is limited by the power available from speckle-free light sources. Currently, the vast majority of wide-field microscopes use either mercury arc lamps or LEDs as the illumination source. The power available from these sources limits wide-field fluorescence microscopy to temporal resolutions of tens of microseconds. Lasers, while capable of producing high power and short pulses, have high spatial coherence. This leads to the formation of laser speckle that makes such sources unsuitable for wide-field imaging applications. Random Raman lasers offer the best of both worlds by producing laser-like intensities; short, nanosecond-scale pulses; and low-spatial-coherence, speckle-free output. These qualities combine to make random Raman lasers 4 orders of magnitude brighter than traditional wide-field microscopy light sources. Furthermore, the unique properties of random Raman lasers open entirely new possibilities such as wide-field fluorescence lifetime imaging and wide-field Raman microscopy. We will introduce the relevant physics that gives rise to the unique properties of random Raman lasing, and present early proof-of-principle results in which random Raman lasing emission is used as an imaging light source. Finally, we will discuss future directions and elucidate the benefits of using random Raman lasers as a wide-field microscopy light source.
Maximally nonlocal theories cannot be maximally random.
de la Torre, Gonzalo; Hoban, Matty J; Dhara, Chirag; Prettico, Giuseppe; Acín, Antonio
2015-04-24
Correlations that violate a Bell inequality are said to be nonlocal; i.e., they do not admit a local and deterministic explanation. Great effort has been devoted to studying how the amount of nonlocality (as measured by a Bell inequality violation) serves to quantify the amount of randomness present in observed correlations. In this work we reverse this research program and ask what the randomness certification capabilities of a theory tell us about the nonlocality of that theory. We find that, contrary to initial intuition, maximal randomness certification cannot occur in maximally nonlocal theories. We go on to show that quantum theory, in contrast, permits certification of maximal randomness in all dichotomic scenarios. We hence pose the question of whether quantum theory is optimal for randomness; i.e., is it the most nonlocal theory that allows maximal randomness certification? We answer this question in the negative by identifying a larger-than-quantum set of correlations capable of this feat. Not only are these results relevant to understanding quantum mechanics' fundamental features, but they also put fundamental restrictions on device-independent protocols based on the no-signaling principle. PMID:25955039
Contextuality is about identity of random variables
NASA Astrophysics Data System (ADS)
Dzhafarov, Ehtibar N.; Kujala, Janne V.
2014-12-01
Contextual situations are those in which seemingly ‘the same’ random variable changes its identity depending on the conditions under which it is recorded. Such a change of identity is observed whenever the assumption that the variable is one and the same under different conditions leads to contradictions when one considers its joint distribution with other random variables (this is the essence of all Bell-type theorems). In our Contextuality-by-Default approach, instead of asking why or how the conditions force ‘one and the same’ random variable to change ‘its’ identity, any two random variables recorded under different conditions are considered different ‘automatically.’ They are never the same, nor are they jointly distributed, but one can always impose on them a joint distribution (probabilistic coupling). The special situations when there is a coupling in which these random variables are equal with probability 1 are considered noncontextual. Contextuality means that such couplings do not exist. We argue that the determination of the identity of random variables by conditions under which they are recorded is not a causal relationship and cannot violate laws of physics.
Randomization Does Not Help Much, Comparability Does
Saint-Mont, Uwe
2015-01-01
According to R.A. Fisher, randomization “relieves the experimenter from the anxiety of considering innumerable causes by which the data may be disturbed.” Since, in particular, it is said to control for known and unknown nuisance factors that may considerably challenge the validity of a result, it has become very popular. This contribution challenges the received view. First, looking for quantitative support, we study a number of straightforward, mathematically simple models. They all demonstrate that the optimism surrounding randomization is questionable: In small to medium-sized samples, random allocation of units to treatments typically yields a considerable imbalance between the groups, i.e., confounding due to randomization is the rule rather than the exception. In the second part of this contribution, the reasoning is extended to a number of traditional arguments in favour of randomization. This discussion is rather non-technical, and sometimes touches on the rather fundamental Frequentist/Bayesian debate. However, the result of this analysis turns out to be quite similar: While the contribution of randomization remains doubtful, comparability contributes much to a compelling conclusion. Summing up, classical experimentation based on sound background theory and the systematic construction of exchangeable groups seems to be advisable. PMID:26193621
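The small-sample imbalance the author describes is easy to demonstrate by simulation; the sketch below (my construction, not from the paper) randomly allocates units carrying a 50/50 binary nuisance factor into two arms and records the prevalence gap between them.

```python
import random

def imbalance_simulation(n, trials=10_000, seed=0):
    """Randomly allocate n units (each carrying a 50/50 binary nuisance
    factor) into two equal arms; return the mean absolute difference in
    the factor's prevalence between the arms."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(trials):
        covariate = [rng.random() < 0.5 for _ in range(n)]
        idx = list(range(n))
        rng.shuffle(idx)
        half = n // 2
        a = sum(covariate[i] for i in idx[:half]) / half
        b = sum(covariate[i] for i in idx[half:]) / (n - half)
        diffs.append(abs(a - b))
    return sum(diffs) / trials

# With n = 20, the groups typically differ by well over ten
# percentage points on this single covariate.
gap = imbalance_simulation(20)
```

Repeating with larger `n` shows the gap shrinking roughly as 1/sqrt(n), which is the quantitative core of the "small to medium-sized samples" caveat.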
Constructing random matrices to represent real ecosystems.
James, Alex; Plank, Michael J; Rossberg, Axel G; Beecham, Jonathan; Emmerson, Mark; Pitchford, Jonathan W
2015-05-01
Models of complex systems with n components typically have order n² parameters because each component can potentially interact with every other. When it is impractical to measure these parameters, one may choose random parameter values and study the emergent statistical properties at the system level. Many influential results in theoretical ecology have been derived from two key assumptions: that species interact with random partners at random intensities and that intraspecific competition is comparable between species. Under these assumptions, community dynamics can be described by a community matrix that is often amenable to mathematical analysis. We combine empirical data with mathematical theory to show that both of these assumptions lead to results that must be interpreted with caution. We examine 21 empirically derived community matrices constructed using three established, independent methods. The empirically derived systems are more stable by orders of magnitude than results from random matrices. This consistent disparity is not explained by existing results on predator-prey interactions. We investigate the key properties of empirical community matrices that distinguish them from random matrices. We show that network topology is less important than the relationship between a species' trophic position within the food web and its interaction strengths. We identify key features of empirical networks that must be preserved if random matrix models are to capture the features of real ecosystems. PMID:25905510
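A May-style random community matrix, and a crude dynamical probe of its stability, can be sketched as follows. Parameters and the probe itself are illustrative choices of mine, not the authors' methods; a real analysis would examine the eigenvalues of the matrix directly.

```python
import random

def random_community_matrix(n, sigma, c, d, seed=0):
    """May-style random community matrix: off-diagonal entries are
    nonzero with probability c and drawn N(0, sigma^2); the diagonal
    is -d (identical intraspecific competition across species)."""
    rng = random.Random(seed)
    return [[-d if i == j else
             (rng.gauss(0, sigma) if rng.random() < c else 0.0)
             for j in range(n)] for i in range(n)]

def is_stable(A, t_end=100.0, dt=0.01):
    """Crude stability probe: integrate dx/dt = A x from an all-ones
    start and check whether the state decays toward zero."""
    n = len(A)
    x = [1.0] * n
    for _ in range(int(t_end / dt)):
        x = [xi + dt * sum(A[i][j] * x[j] for j in range(n))
             for i, xi in enumerate(x)]
        if max(abs(v) for v in x) > 1e6:
            return False
    return max(abs(v) for v in x) < 1e-3

# May's criterion: stability is expected roughly when sigma*sqrt(n*c) < d.
weak = is_stable(random_community_matrix(20, sigma=0.1, c=0.3, d=1.0))
strong = is_stable(random_community_matrix(20, sigma=2.0, c=0.3, d=1.0))
```

Comparing such random ensembles against empirically parameterized matrices is the comparison the paper reports as differing by orders of magnitude.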
Sunspot random walk and 22-year variation
Love, Jeffrey J.; Rigler, E. Joshua
2012-01-01
We examine two stochastic models for consistency with observed long-term secular trends in sunspot number and a faint, but semi-persistent, 22-yr signal: (1) a null hypothesis, a simple one-parameter random-walk model of sunspot-number cycle-to-cycle change, and, (2) an alternative hypothesis, a two-parameter random-walk model with an imposed 22-yr alternating amplitude. The observed secular trend in sunspots, seen from solar cycle 5 to 23, would not be an unlikely result of the accumulation of multiple random-walk steps. Statistical tests show that a 22-yr signal can be resolved in historical sunspot data; that is, the probability is low that it would be realized from random data. On the other hand, the 22-yr signal has a small amplitude compared to random variation, and so it has a relatively small effect on sunspot predictions. Many published predictions for cycle 24 sunspots fall within the dispersion of previous cycle-to-cycle sunspot differences. The probability is low that the Sun will, with the accumulation of random steps over the next few cycles, walk down to a Dalton-like minimum. Our models support published interpretations of sunspot secular variation and 22-yr variation resulting from cycle-to-cycle accumulation of dynamo-generated magnetic energy.
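The two-parameter model can be sketched in a few lines; the step size, alternating amplitude, and non-negativity clamp below are illustrative choices, not the paper's fitted values.

```python
import random

def sunspot_walk(n_cycles, step_sd, alt_amp, start=100.0, seed=0):
    """Sketch of the two-parameter model: cycle-to-cycle sunspot
    amplitude follows a random walk (steps ~ N(0, step_sd)) plus an
    imposed alternating (22-yr, every-other-cycle) signal of size
    alt_amp; setting alt_amp = 0 recovers the null hypothesis."""
    rng = random.Random(seed)
    level = start
    amplitudes = []
    for k in range(n_cycles):
        level += rng.gauss(0.0, step_sd)
        signed = level + (alt_amp if k % 2 == 0 else -alt_amp)
        amplitudes.append(max(signed, 0.0))  # amplitudes cannot go negative
    return amplitudes

series = sunspot_walk(19, step_sd=20.0, alt_amp=10.0)  # cycles 5..23
```

Generating many such realizations and comparing their secular drift with the observed record is the kind of consistency test the paper describes.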
Metapopulation Persistence in Random Fragmented Landscapes
Grilli, Jacopo; Barabás, György; Allesina, Stefano
2015-01-01
Habitat destruction and land use change are making the world in which natural populations live increasingly fragmented, often leading to local extinctions. Although local populations might undergo extinction, a metapopulation may still be viable as long as patches of suitable habitat are connected by dispersal, so that empty patches can be recolonized. Thus far, metapopulation models have either taken a mean-field approach or have modeled empirically based, realistic landscapes. Here we show that an intermediate level of complexity between these two extremes is the study of random landscapes, in which the patches of suitable habitat are randomly arranged in an area (or volume). Using methods borrowed from the mathematics of Random Geometric Graphs and Euclidean Random Matrices, we derive a simple, analytic criterion for the persistence of the metapopulation in random fragmented landscapes. Our results show how the density of patches, the variability in their value, the shape of the dispersal kernel, and the dimensionality of the landscape all contribute to determining the fate of the metapopulation. Using this framework, we derive sufficient conditions for the population to be spatially localized, such that spatially confined clusters of patches act as a source of dispersal for the whole landscape. Finally, we show that a regular arrangement of the patches is always detrimental for persistence, compared to the random arrangement of the patches. Given the strong parallel between metapopulation models and contact processes, our results are also applicable to models of disease spread on spatial networks. PMID:25993004
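The random-landscape setting corresponds to a random geometric graph of patches. The sketch below (not the paper's analytic machinery) scatters patches in the unit square, connects those within a dispersal radius, and measures the largest connected cluster, the candidate backbone for persistence.

```python
import math
import random

def random_patch_graph(n, radius, seed=0):
    """Scatter n habitat patches uniformly in the unit square and connect
    pairs closer than a dispersal radius (a random geometric graph)."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) < radius:
                adj[i].add(j)
                adj[j].add(i)
    return pts, adj

def largest_component(adj):
    """Size of the largest connected cluster of patches (depth-first)."""
    seen, best = set(), 0
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], 0
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp += 1
            stack.extend(adj[v] - seen)
        best = max(best, comp)
    return best

# Above the percolation threshold a giant cluster spans the landscape;
# below it the patches fragment into small isolated groups.
_, dense_adj = random_patch_graph(200, radius=0.15)
_, sparse_adj = random_patch_graph(200, radius=0.02)
big, small = largest_component(dense_adj), largest_component(sparse_adj)
```

Sweeping the radius (the dispersal scale) against patch density traces out exactly the persistence transition the analytic criterion captures.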
Navigability of interconnected networks under random failures
De Domenico, Manlio; Solé-Ribalta, Albert; Gómez, Sergio; Arenas, Alex
2014-01-01
Assessing the navigability of interconnected networks (transporting information, people, or goods) under random failures is of utmost importance to design and protect critical infrastructures. Random walks are a good proxy to determine this navigability, specifically the coverage time of random walks, which is a measure of the dynamical functionality of the network. Here, we introduce the theoretical tools required to describe random walks in interconnected networks accounting for structure and dynamics inherent to real systems. We develop an analytical approach for the covering time of random walks in interconnected networks and compare it with extensive Monte Carlo simulations. Generally speaking, interconnected networks are more resilient to random failures than their individual layers per se, and we are able to quantify this effect. As an application, which we illustrate by considering the public transport of London, we show how the efficiency in exploring the multiplex critically depends on layers' topology, interconnection strengths, and walk strategy. Our findings are corroborated by data-driven simulations, where the empirical distribution of check-ins and check-outs is considered and passengers travel along fastest paths in a network affected by real disruptions. These findings are fundamental for further development of searching and navigability strategies in real interconnected systems. PMID:24912174
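A Monte Carlo estimate of the cover time on a toy multiplex illustrates the quantity being studied. Treating the interconnected network as the union of its layers' edges is a simplification of the paper's full multiplex dynamics, which also model explicit layer-switching.

```python
import random

def cover_time(adj, start=0, seed=0, max_steps=10**6):
    """One Monte Carlo trial of the time for a single walker, moving to
    a uniformly random neighbour each step, to visit every node."""
    rng = random.Random(seed)
    visited = {start}
    node, steps = start, 0
    while len(visited) < len(adj) and steps < max_steps:
        node = rng.choice(adj[node])
        visited.add(node)
        steps += 1
    return steps

def union_of_layers(layers):
    """Crude proxy for an interconnected network: at node i the walker
    may follow an edge from any layer."""
    n = len(layers[0])
    return [sorted(set().union(*(set(layer[i]) for layer in layers)))
            for i in range(n)]

# Two layers over 12 nodes: a ring and a layer of longer-range chords.
ring = [[(i - 1) % 12, (i + 1) % 12] for i in range(12)]
chords = [[(i - 3) % 12, (i + 3) % 12] for i in range(12)]
both = union_of_layers([ring, chords])
t_ring, t_both = cover_time(ring), cover_time(both)
```

Averaging many trials, and deleting random edges before each one, gives the simulation counterpart of the paper's failure-resilience analysis.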
Soft random solids and their heterogeneous elasticity.
Mao, Xiaoming; Goldbart, Paul M; Xing, Xiangjun; Zippelius, Annette
2009-09-01
Spatial heterogeneity in the elastic properties of soft random solids is examined via vulcanization theory. The spatial heterogeneity in the structure of soft random solids is a result of the fluctuations locked-in at their synthesis, which also brings heterogeneity in their elastic properties. Vulcanization theory studies semimicroscopic models of random-solid-forming systems and applies replica field theory to deal with their quenched disorder and thermal fluctuations. The elastic deformations of soft random solids are argued to be described by the Goldstone sector of fluctuations contained in vulcanization theory, associated with a subtle form of spontaneous symmetry breaking that is associated with the liquid-to-random-solid transition. The resulting free energy of this Goldstone sector can be reinterpreted as arising from a phenomenological description of an elastic medium with quenched disorder. Through this comparison, we arrive at the statistics of the quenched disorder of the elasticity of soft random solids in terms of residual stress and Lamé-coefficient fields. In particular, there are large residual stresses in the equilibrium reference state, and the disorder correlators involving the residual stress are found to be long ranged and governed by a universal parameter that also gives the mean shear modulus. PMID:19905095
Rozental, Alexander
2013-01-01
Background Procrastination, to voluntarily delay an intended course of action despite expecting to be worse off for the delay, is a persistent behavior pattern that can cause major psychological suffering. Approximately half of the student population and 15%-20% of the adult population are presumed to have substantial difficulties due to chronic and recurrent procrastination in their everyday life. However, preconceptions and a lack of knowledge restrict the availability of adequate care. Cognitive behavior therapy (CBT) is often considered the treatment of choice, although no clinical trials have previously been carried out. Objective The aim of this study will be to test the effects of CBT for procrastination, and to investigate whether it can be delivered via the Internet. Methods Participants will be recruited through advertisements in newspapers, other media, and the Internet. Only people residing in Sweden with access to the Internet and suffering from procrastination will be included in the study. A randomized controlled trial with a sample size of 150 participants divided into three groups will be utilized. The treatment group will consist of 50 participants receiving a 10-week CBT intervention with weekly therapist contact. A second treatment group with 50 participants receiving the same treatment, but without therapist contact, will also be employed. The intervention being used for the current study is derived from a self-help book for procrastination written by one of the authors (AR). It includes several CBT techniques commonly used for the treatment of procrastination (eg, behavioral activation, behavioral experiments, stimulus control, and psychoeducation on motivation and different work methods). A control group consisting of 50 participants on a wait-list will be used to evaluate the effects of the CBT intervention. For ethical reasons, the participants in the control group will gain access to the same intervention following the 10-week treatment
Randomized Algorithms for Matrices and Data
NASA Astrophysics Data System (ADS)
Mahoney, Michael W.
2012-03-01
This chapter reviews recent work on randomized matrix algorithms. By “randomized matrix algorithms,” we refer to a class of recently developed random sampling and random projection algorithms for ubiquitous linear algebra problems such as least-squares (LS) regression and low-rank matrix approximation. These developments have been driven by applications in large-scale data analysis—applications which place very different demands on matrices than traditional scientific computing applications. Thus, in this review, we will focus on highlighting the simplicity and generality of several core ideas that underlie the usefulness of these randomized algorithms in scientific applications such as genetics (where these algorithms have already been applied) and astronomy (where, hopefully, in part due to this review they will soon be applied). The work we will review here had its origins within theoretical computer science (TCS). An important feature in the use of randomized algorithms in TCS more generally is that one must identify and then algorithmically deal with relevant “nonuniformity structure” in the data. For the randomized matrix algorithms to be reviewed here and that have proven useful recently in numerical linear algebra (NLA) and large-scale data analysis applications, the relevant nonuniformity structure is defined by the so-called statistical leverage scores. Defined more precisely below, these leverage scores are basically the diagonal elements of the projection matrix onto the dominant part of the spectrum of the input matrix. As such, they have a long history in statistical data analysis, where they have been used for outlier detection in regression diagnostics. More generally, these scores often have a very natural interpretation in terms of the data and processes generating the data. For example, they can be interpreted in terms of the leverage or influence that a given data point has on, say, the best low-rank matrix approximation; and this
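Statistical leverage scores as described here, the diagonal of the hat (projection) matrix, can be computed directly for a small two-column data matrix. This dependency-free sketch is illustrative of the quantity itself, not of the randomized algorithms that sample by it.

```python
import random

def leverage_scores_2col(X):
    """Statistical leverage scores for an n-by-2 data matrix X: the
    diagonal of the hat matrix H = X (X^T X)^{-1} X^T, computed with an
    explicit 2x2 inverse to avoid any linear-algebra dependency."""
    a = sum(x[0] * x[0] for x in X)
    b = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X)
    det = a * d - b * b
    inv = ((d / det, -b / det), (-b / det, a / det))
    scores = []
    for x0, x1 in X:
        u0 = inv[0][0] * x0 + inv[0][1] * x1
        u1 = inv[1][0] * x0 + inv[1][1] * x1
        scores.append(x0 * u0 + x1 * u1)  # h_i = x_i^T (X^T X)^{-1} x_i
    return scores

# The scores sum to the rank (here 2) and flag influential rows, which
# is why they reappear in regression diagnostics as outlier detectors.
rng = random.Random(0)
X = [(1.0, rng.gauss(0, 1)) for _ in range(50)] + [(1.0, 10.0)]  # one outlier
h = leverage_scores_2col(X)
```

Randomized least-squares and low-rank algorithms exploit exactly this non-uniformity, sampling rows with probability proportional to their leverage.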
Statistical properties of randomization in clinical trials.
Lachin, J M
1988-12-01
This is the first of five articles on the properties of different randomization procedures used in clinical trials. This paper presents definitions and discussions of the statistical properties of randomization procedures as they relate to both the design of a clinical trial and the statistical analysis of trial results. The subsequent papers consider, respectively, the properties of simple (complete), permuted-block (i.e., blocked), and urn (adaptive biased-coin) randomization. The properties described herein are the probabilities of treatment imbalances and the potential effects on the power of statistical tests; the permutational basis for statistical tests; and the potential for experimental biases in the assessment of treatment effects due either to the predictability of the random allocations (selection bias) or the susceptibility of the randomization procedure to covariate imbalances (accidental bias). For most randomization procedures, the probabilities of overall treatment imbalances are readily computed, even when a stratified randomization is used. This is important because treatment imbalance may affect statistical power. It is shown, however, that treatment imbalance must be substantial before power is more than trivially affected. The differences between a population versus a permutation model as a basis for a statistical test are reviewed. It is argued that a population model can only be invoked in clinical trials as an untestable assumption, rather than being formally based on sampling at random from a population. On the other hand, a permutational analysis based on the randomization actually employed requires no assumptions regarding the origin of the samples of patients studied. The large sample permutational distribution of the family of linear rank tests is described as a basis for easily conducting a variety of permutation tests. Subgroup (stratified) analyses, analyses when some data are missing, and regression model analyses are also
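Permuted-block randomization, one of the procedures examined in this series, can be sketched directly; the helper below (my naming, not from the paper) guarantees that treatment imbalance never exceeds half a block at any point in the trial.

```python
import random

def permuted_block_randomization(n, block_size=4, arms=("A", "B"), seed=0):
    """Permuted-block randomization: within each block of `block_size`
    assignments, the arms appear equally often in random order, bounding
    the running treatment imbalance by block_size / 2."""
    assert block_size % len(arms) == 0, "block must divide evenly across arms"
    rng = random.Random(seed)
    sched = []
    while len(sched) < n:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)       # permute within the block
        sched.extend(block)
    return sched[:n]

seq = permuted_block_randomization(10)
```

The trade-off the paper analyzes is visible here: the tight balance comes at the cost of predictability, since the last assignments in a block can sometimes be deduced (selection bias).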
Wang, Fei; Toselli, Italo; Korotkova, Olga
2016-02-10
An optical system consisting of a laser source and two independent consecutive phase-only spatial light modulators (SLMs) is shown to accurately simulate a generated random beam (first SLM) after interaction with a stationary random medium (second SLM). To illustrate the range of possibilities, a recently introduced class of random optical frames is examined on propagation in free space and several weak turbulent channels with Kolmogorov and non-Kolmogorov statistics. PMID:26906385
Helmhout, Pieter H; Harts, Chris C; Staal, J Bart; de Bie, Rob A
2004-01-01
Background Researchers from the Royal Netherlands Army are studying the potential of isolated lumbar extensor training in low back pain in their working population. Currently, a randomized controlled trial is carried out in five military health centers in The Netherlands and Germany, in which a 10-week program of not more than 2 training sessions (10–15 minutes) per week is studied in soldiers with nonspecific low back pain for more than 4 weeks. The purpose of the study is to investigate the efficacy of this 'minimal intervention program', compared to usual care. Moreover, attempts are made to identify subgroups of different responders to the intervention. Methods Besides a baseline measurement, follow-up data are gathered at two short-term intervals (5 and 10 weeks after randomization) and two long-term intervals (6 months and one year after the end of the intervention), respectively. At every test moment, participants fill out a compound questionnaire on a stand-alone PC, and they undergo an isometric back strength measurement on a lower back machine. Primary outcome measures in this study are: self-assessed degree of complaints and degree of handicap in daily activities due to back pain. In addition, our secondary measurements focus on: fear of movement/(re-) injury, mental and social health perception, individual back extension strength, and satisfaction of the patient with the treatment perceived. Finally, we assess a number of potential prognostic factors: demographic and job characteristics, overall health, the degree of physical activity, and the attitudes and beliefs of the physiotherapist towards chronic low back pain. Discussion Although a substantial number of trials have been conducted that included lumbar extension training in low back pain patients, hardly any study has emphasized a minimal intervention approach comparable to ours. For reasons of time efficiency and patient preferences, this minimal sports medicine approach of low back pain
Jakobsen, Markus D.; Sundstrup, Emil; Brandt, Mikkel; Jay, Kenneth; Aagaard, Per; Andersen, Lars L.
2015-01-01
Objectives. The present study investigates the effect of workplace- versus home-based physical exercise on muscle reflex response to sudden trunk perturbation among healthcare workers. Methods. Two hundred female healthcare workers (age: 42 [SD 11], BMI: 24 [SD 4], and pain intensity: 3.1 [SD 2.2] on a scale of 0–10) from 18 departments at three hospitals were randomized at the cluster level to 10 weeks of (1) workplace physical exercise (WORK) performed in groups during working hours for 5 × 10 minutes per week and up to 5 group-based coaching sessions on motivation for regular physical exercise, or (2) home-based physical exercise (HOME) performed during leisure time for 5 × 10 minutes per week. Mechanical and neuromuscular (EMG) response to randomly assigned unloading and loading trunk perturbations and questions of fear avoidance were assessed at baseline and 10-week follow-up. Results. No group by time interaction for the mechanical trunk response and EMG latency time was seen following the ten weeks (P = 0.17–0.75). However, both groups demonstrated within-group changes (P < 0.05) in stopping time during the loading and unloading perturbation and in stopping distance during the loading perturbation. Furthermore, EMG preactivation of the erector spinae and fear avoidance were reduced more following WORK than HOME (95% CI −2.7–−0.7 (P < 0.05) and −0.14 (−0.30 to 0.02) (P = 0.09)), respectively. WORK and HOME performed 2.2 (SD: 1.1) and 1.0 (SD: 1.2) training sessions per week, respectively. Conclusions. Although training adherence was higher following WORK compared to HOME this additional training volume did not lead to significant between-group differences in the responses to sudden trunk perturbations. However, WORK led to reduced fear avoidance and reduced muscle preactivity prior to the perturbation onset, compared with HOME. This trial is registered with Clinicaltrials.gov (NCT01921764). PMID:26583145
Khonina, Svetlana N; Golub, Ilya
2015-09-01
We show that it is possible to generate transversely random, diffraction-free/longitudinally invariant vector optical fields. The randomness in transverse polarization distribution complements a previously studied one in intensity of scalar Bessel-type beams, adding another degree of freedom to control these beams. Moreover, we show that the relative transversely random phase distribution is also conserved along the optical axis. Thus, intensity, phase, and polarization of Bessel-type beams can be transversely random/arbitrary while invariant upon propagation. Such fields may find applications in encryption/secure communications, optical trapping, etc. PMID:26368714
Random selection as a confidence building tool
Macarthur, Duncan W; Hauck, Danielle; Langner, Diana; Thron, Jonathan; Smith, Morag; Williams, Richard
2010-01-01
Any verification measurement performed on potentially classified nuclear material must satisfy two seemingly contradictory constraints. First and foremost, no classified information can be released. At the same time, the monitoring party must have confidence in the veracity of the measurement. The first concern can be addressed by performing the measurements within the host facility using instruments under the host's control. Because the data output in this measurement scenario is also under host control, it is difficult for the monitoring party to have confidence in that data. One technique for addressing this difficulty is random selection. The concept of random selection can be thought of as four steps: (1) The host presents several 'identical' copies of a component or system to the monitor. (2) One (or more) of these copies is randomly chosen by the monitors for use in the measurement system. (3) Similarly, one or more is randomly chosen to be validated further at a later date in a monitor-controlled facility. (4) Because the two components or systems are identical, validation of the 'validation copy' is equivalent to validation of the measurement system. This procedure sounds straightforward, but effective application may be quite difficult. Although random selection is often viewed as a panacea for confidence building, the amount of confidence generated depends on the monitor's continuity of knowledge for both validation and measurement systems. In this presentation, we will discuss the random selection technique, as well as where and how this technique might be applied to generate maximum confidence. In addition, we will discuss the role of modular measurement-system design in facilitating random selection and describe a simple modular measurement system incorporating six small ³He neutron detectors and a single high-purity germanium gamma detector.
Randomized central limit theorems: A unified theory
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2010-08-01
The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles’ aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles’ extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic—scaling all ensemble components by a common deterministic scale. However, there are “random environment” settings in which the underlying scaling schemes are stochastic—scaling the ensemble components by different random scales. Examples of such settings include Holtsmark’s law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)—in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes—and present “randomized counterparts” to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.
Anomalous Anticipatory Responses in Networked Random Data
Nelson, Roger D.; Bancel, Peter A.
2006-10-16
We examine an 8-year archive of synchronized, parallel time series of random data from a world spanning network of physical random event generators (REGs). The archive is a publicly accessible matrix of normally distributed 200-bit sums recorded at 1 Hz which extends from August 1998 to the present. The primary question is whether these data show non-random structure associated with major events such as natural or man-made disasters, terrible accidents, or grand celebrations. Secondarily, we examine the time course of apparently correlated responses. Statistical analyses of the data reveal consistent evidence that events which strongly affect people engender small but significant effects. These include suggestions of anticipatory responses in some cases, leading to a series of specialized analyses to assess possible non-random structure preceding precisely timed events. A focused examination of data collected around the time of earthquakes with Richter magnitude 6 and greater reveals non-random structure with a number of intriguing, potentially important features. Anomalous effects in the REG data are seen only when the corresponding earthquakes occur in populated areas. No structure is found if they occur in the oceans. We infer that an important contributor to the effect is the relevance of the earthquake to humans. Epoch averaging reveals evidence for changes in the data some hours prior to the main temblor, suggestive of reverse causation.
On grey levels in random CAPTCHA generation
NASA Astrophysics Data System (ADS)
Newton, Fraser; Kouritzin, Michael A.
2011-06-01
A CAPTCHA is an automatically generated test designed to distinguish between humans and computer programs; specifically, they are designed to be easy for humans but difficult for computer programs to pass in order to prevent the abuse of resources by automated bots. They are commonly seen guarding webmail registration forms, online auction sites, and preventing brute force attacks on passwords. In the following, we address the question: How does adding a grey level to random CAPTCHA generation affect the utility of the CAPTCHA? We treat the problem of generating the random CAPTCHA as one of random field simulation: An initial state of background noise is evolved over time using Gibbs sampling and an efficient algorithm for generating correlated random variables. This approach has already been found to yield highly-readable yet difficult-to-crack CAPTCHAs. We detail how the requisite parameters for introducing grey levels are estimated and how we generate the random CAPTCHA. The resulting CAPTCHA will be evaluated in terms of human readability as well as its resistance to automated attacks in the forms of character segmentation and optical character recognition.
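The random-field approach described above can be illustrated with a minimal Gibbs sampler over a three-level (black/grey/white) field. The Potts-style coupling, inverse temperature, and grid size below are illustrative assumptions, not the authors' parameters:

```python
import math
import random

def gibbs_sweep(field, beta, levels=3):
    """One Gibbs sweep over a 2D field whose sites take values 0..levels-1
    (e.g. black/grey/white). Each site is resampled from its conditional
    distribution given its four neighbours; the coupling favours agreement."""
    h, w = len(field), len(field[0])
    for i in range(h):
        for j in range(w):
            neigh = [field[(i - 1) % h][j], field[(i + 1) % h][j],
                     field[i][(j - 1) % w], field[i][(j + 1) % w]]
            weights = [math.exp(beta * sum(1 for n in neigh if n == v))
                       for v in range(levels)]
            r = random.uniform(0, sum(weights))
            acc = 0.0
            for v, wv in enumerate(weights):
                acc += wv
                if r <= acc:
                    field[i][j] = v
                    break

# evolve an initial state of background noise, as in the abstract
random.seed(0)
field = [[random.randrange(3) for _ in range(16)] for _ in range(16)]
for _ in range(5):
    gibbs_sweep(field, beta=0.8)
```

Larger `beta` produces smoother blobs of a single level, which is what makes the rendered characters readable to humans while keeping segmentation hard for bots.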
Plasmonic enhancement of Rhodamine dye random lasers
NASA Astrophysics Data System (ADS)
Ismail, Wan Zakiah Wan; Vo, Thanh Phong; Goldys, Ewa M.; Dawes, Judith M.
2015-08-01
We demonstrate improved characteristics in Rhodamine dye random lasers with the addition of gold nanoparticles. As a result of the strong plasmonic enhancement induced by gold nanoparticles, Rhodamine 640/gold random lasers have less than half the lasing threshold compared with Rhodamine 640/alumina random lasers in the weakly scattering regime for a 10⁻³ M dye concentration. The optimum concentration of gold nanoparticles occurs at ~8 × 10¹⁰ cm⁻³, close to the transition between the weakly scattering and diffusive regimes. Rhodamine 640 performs better than Rhodamine 6G, which is attributed to the greater spectral overlap of the Rhodamine 6G fluorescence spectrum with the plasmon resonance of gold, leading to increased energy transfer and fluorescence quenching for Rhodamine 6G by gold. We also observe contrasting trends in lasing threshold between random dye lasers incorporating dielectric and metal nanoparticles in the diffusive scattering regime. The effects of gold nanoparticles in random dye lasers are discussed in the context of the tradeoff between local field enhancement and fluorescence quenching.
A random walk approach to quantum algorithms.
Kendon, Vivien M
2006-12-15
The development of quantum algorithms based on quantum versions of random walks is placed in the context of the emerging field of quantum computing. Constructing a suitable quantum version of a random walk is not trivial; pure quantum dynamics is deterministic, so randomness only enters during the measurement phase, i.e. when converting the quantum information into classical information. The outcome of a quantum random walk is very different from the corresponding classical random walk owing to the interference between the different possible paths. The upshot is that quantum walkers find themselves further from their starting point than a classical walker on average, and this forms the basis of a quantum speed up, which can be exploited to solve problems faster. Surprisingly, the effect of making the walk slightly less than perfectly quantum can optimize the properties of the quantum walk for algorithmic applications. Looking to the future, even with a small quantum computer available, the development of quantum walk algorithms might proceed more rapidly than it has, especially for solving real problems. PMID:17090467
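The spread advantage described above is easy to see in simulation. The sketch below is a generic discrete-time Hadamard walk on the line, not code from the paper; the step count and initial coin state are illustrative choices:

```python
import numpy as np

def hadamard_walk(steps):
    """Discrete-time quantum walk on the line with a Hadamard coin.
    psi[pos, coin] holds amplitudes on positions -steps..steps."""
    n = 2 * steps + 1
    psi = np.zeros((n, 2), dtype=complex)
    # symmetric initial coin state at the origin
    psi[steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T            # coin operation at every site
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]   # coin 0 shifts right
        new[:-1, 1] = psi[1:, 1]   # coin 1 shifts left
        psi = new
    return (np.abs(psi) ** 2).sum(axis=1)  # position distribution

probs = hadamard_walk(50)
positions = np.arange(-50, 51)
quantum_std = np.sqrt((probs * positions**2).sum())
classical_std = np.sqrt(50)  # unbiased classical walk: std = sqrt(t)
```

After 50 steps the quantum walker's standard deviation grows linearly in the step count, far beyond the classical √t, which is the origin of the quadratic speed up mentioned above.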
How chaosity and randomness control human health
NASA Astrophysics Data System (ADS)
Yulmetyev, Renat M.; Yulmetyeva, Dinara; Gafarov, Fail M.
2005-08-01
We discuss the fundamental role that chaosity and randomness play in determining the quality and efficiency of medical treatment. The statistical parameter of non-Markovity from the non-equilibrium statistical physics of condensed matter is offered as a quantitative information measure of chaosity and randomness. The role of chaosity and randomness is determined by a phenomenological property, which includes quantitative informational measures of chaosity and randomness and of pathology (disease) in a covariant form. Manifestations of the statistical informational behavior of chaosity and randomness are examined by analyzing the chaotic dynamics of RR intervals from human ECGs, the electric signals of leg muscle tremor in a normal state and in Parkinson's disease, the electric potentials of the human brain cortex from EEGs during epileptic seizure, and human hand finger tremor in Parkinson's disease. The existence of the above-stated informational measure allows us to introduce a quantitative factor of the quality of treatment. These examples confirm the existence of a new phenomenological property that is important not only for the solution of medical problems but also for the analysis of a wide range of problems in the physics of complex systems in living and inanimate nature.
Lasser, Robert A; Dirks, Bryan; Nasrallah, Henry; Kirsch, Courtney; Gao, Joseph; Pucci, Michael L; Knesevich, Mary A; Lindenmayer, Jean-Pierre
2013-10-01
Negative symptoms of schizophrenia (NSS), related to hypodopaminergic activity in the mesocortical pathway and prefrontal cortex, are predictive of poor outcomes and have no effective treatment. Use of dopamine-enhancing drugs (eg, psychostimulants) has been limited by potential adverse effects. This multicenter study examined lisdexamfetamine dimesylate (LDX), a d-amphetamine prodrug, as adjunctive therapy to antipsychotics in adults with clinically stable schizophrenia and predominant NSS. Outpatients with stable schizophrenia, predominant NSS, limited positive symptoms, and maintained on stable atypical antipsychotic therapy underwent a 3-week screening, 10-week open-label adjunctive LDX (20-70 mg/day), and 4-week, double-blind, randomized, placebo-controlled withdrawal. Efficacy measures included a modified Scale for the Assessment of Negative Symptoms (SANS-18) and Positive and Negative Syndrome Scale (PANSS) total and subscale scores. Ninety-two participants received open-label LDX; 69 received double-blind therapy with placebo (n=35) or LDX (n=34). At week 10 (last observation carried forward; last open-label visit), mean (95% confidence interval) change in SANS-18 scores was -12.9 (-15.0, -10.8; P<0.0001). At week 10, 52.9% of participants demonstrated a minimum of 20% reduction from baseline in SANS-18 score. Open-label LDX was also associated with significant improvement in PANSS total and subscale scores. During the double-blind/randomized-withdrawal phase, no significant differences (change from randomization baseline) were found between placebo and LDX in SANS-18 or PANSS subscale scores. In adults with clinically stable schizophrenia, open-label LDX appeared to be associated with significant improvements in negative symptoms without positive symptom worsening. Abrupt LDX discontinuation was not associated with positive or negative symptom worsening. Confirmation with larger controlled trials is warranted. PMID:23756608
Mesoscopic description of random walks on combs.
Méndez, Vicenç; Iomin, Alexander; Campos, Daniel; Horsthemke, Werner
2015-12-01
Combs are a simple caricature of various types of natural branched structures, which belong to the category of loopless graphs and consist of a backbone and branches. We study continuous time random walks on combs and present a generic method to obtain their transport properties. The random walk along the branches may be biased, and we account for the effect of the branches by renormalizing the waiting time probability distribution function for the motion along the backbone. We analyze the overall diffusion properties along the backbone and find normal diffusion, anomalous diffusion, and stochastic localization (diffusion failure), respectively, depending on the characteristics of the continuous time random walk along the branches, and compare our analytical results with stochastic simulations. PMID:26764637
Mesoscopic description of random walks on combs
NASA Astrophysics Data System (ADS)
Méndez, Vicenç; Iomin, Alexander; Campos, Daniel; Horsthemke, Werner
2015-12-01
Combs are a simple caricature of various types of natural branched structures, which belong to the category of loopless graphs and consist of a backbone and branches. We study continuous time random walks on combs and present a generic method to obtain their transport properties. The random walk along the branches may be biased, and we account for the effect of the branches by renormalizing the waiting time probability distribution function for the motion along the backbone. We analyze the overall diffusion properties along the backbone and find normal diffusion, anomalous diffusion, and stochastic localization (diffusion failure), respectively, depending on the characteristics of the continuous time random walk along the branches, and compare our analytical results with stochastic simulations.
Competitive Facility Location with Fuzzy Random Demands
NASA Astrophysics Data System (ADS)
Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke
2010-10-01
This paper proposes a new location problem for competitive facilities, e.g. shops, in a plane, with uncertainty and vagueness in the demands for the facilities. By representing the demands as fuzzy random variables, the location problem can be formulated as a fuzzy random programming problem. To solve it, the α-level sets of fuzzy numbers are first used to transform it into a stochastic programming problem; second, using their expectations and variances, it can be reformulated as a deterministic programming problem. After showing that an optimal solution can be found by solving 0-1 programming problems, we propose a solution method that improves the tabu search algorithm with strategic oscillation. The efficiency of the proposed method is shown by applying it to numerical examples of facility location problems.
Entanglement-assisted random access codes
Pawlowski, Marcin; Zukowski, Marek
2010-04-15
An (n,m,p) random access code (RAC) makes it possible to encode n bits in an m-bit message in such a way that a receiver of the message can guess any of the original n bits with probability p greater than (1/2). In quantum RACs (QRACs), one transmits n qubits. The full set of primitive entanglement-assisted random access codes (EARACs) is introduced, in which parties are allowed to share a two-qubit singlet. It is shown that via a concatenation of these, one can build for any n an (n,1,p) EARAC. QRACs for n>3 exist only if parties also share classical randomness. We show that EARACs outperform the best of known QRACs not only in the success probabilities but also in the amount of communication needed in the preparatory stage of the protocol. Upper bounds on the performance of EARACs are given and shown to limit also QRACs.
Operational conditions for random-number generation
NASA Astrophysics Data System (ADS)
Compagner, A.
1995-11-01
Ensemble theory is used to describe arbitrary sequences of integers, whether formed by the decimals of π or produced by a roulette or by any other means. Correlation coefficients of any range and order are defined as Fourier transforms of the ensemble weights. Competing definitions of random sequences are considered. Special attention is given to sequences of random numbers needed for Monte Carlo calculations. Different recipes for those sequences lead to correlations that vary in range and order, but the total amount of correlation is the same for all sequences of a given length (without internal periodicities). For maximum-length sequences produced by linear algorithms, most correlation coefficients are zero, but the remaining ones are of absolute value 1. In well-tempered sequences, these complete correlations are of high order or of very long range. General conditions to be obeyed by random-number generators are discussed and a qualitative method for comparing different recipes is given.
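The maximum-length sequences produced by linear algorithms, as discussed above, can be demonstrated with a small linear feedback shift register. The 7-bit register and tap choice below are an illustrative example (the corresponding polynomial x⁷ + x⁶ + 1 is primitive), not one of the generators analyzed in the paper:

```python
def lfsr(seed, taps, nbits):
    """Fibonacci LFSR over an nbits-wide register. The feedback bit is the
    XOR of the (1-based) tap positions; returns the full cycle of states
    starting from `seed`."""
    mask = (1 << nbits) - 1
    state = seed
    states = []
    while True:
        states.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & mask
        if state == seed:
            return states

# taps (7, 6) give a maximal-length register: it visits all 2^7 - 1
# nonzero states before repeating, the longest period a 7-bit linear
# generator can achieve
cycle = lfsr(seed=1, taps=(7, 6), nbits=7)
```

Any internal periodicity or bad tap choice would show up here as a cycle shorter than 2⁷ − 1, which is the kind of structural defect the correlation analysis above quantifies.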
Diffusion at the Random Matrix Hard Edge
NASA Astrophysics Data System (ADS)
Ramírez, José A.; Rider, Brian
2009-06-01
We show that the limiting minimal eigenvalue distributions for a natural generalization of Gaussian sample-covariance structures (beta ensembles) are described by the spectrum of a random diffusion generator. This generator may be mapped onto the “Stochastic Bessel Operator,” introduced and studied by A. Edelman and B. Sutton in [6] where the corresponding convergence was first conjectured. Here, by a Riccati transformation, we also obtain a second diffusion description of the limiting eigenvalues in terms of hitting laws. All this pertains to the so-called hard edge of random matrix theory and sits in complement to the recent work [15] of the authors and B. Virág on the general beta random matrix soft edge. In fact, the diffusion descriptions found on both sides are used below to prove there exists a transition between the soft and hard edge laws at all values of beta.
Social patterns revealed through random matrix theory
NASA Astrophysics Data System (ADS)
Sarkar, Camellia; Jalan, Sarika
2014-11-01
Despite the tremendous advancements in the field of network theory, very few studies have taken into consideration the weights of interactions, which emerge naturally in all real-world systems. Using random matrix analysis of a weighted social network, we demonstrate the profound impact of interaction weights on emerging structural properties. The analysis reveals that randomness existing in a particular time frame affects the decisions of individuals, affording them more freedom of choice in situations of financial security. While the structural organization of the networks remains the same across all datasets, random matrix theory provides insight into the interaction patterns of individuals in the society in situations of crisis. It is also contemplated that individual accountability in terms of weighted interactions remains a key to success unless segregation of tasks comes into play.
The Generation of Random Equilateral Polygons
NASA Astrophysics Data System (ADS)
Alvarado, Sotero; Calvo, Jorge Alberto; Millett, Kenneth C.
2011-04-01
Freely jointed random equilateral polygons serve as a common model for polymer rings, reflecting their statistical properties under theta conditions. To generate equilateral polygons, researchers employ many procedures that have been proved, or at least are believed, to be random with respect to the natural measure on the space of polygonal knots. As a result, the random selection of equilateral polygons, as well as the statistical robustness of this selection, is of particular interest. In this research, we study the key features of four popular methods: the Polygonal Folding, the Crankshaft Rotation, the Hedgehog, and the Triangle Methods. In particular, we compare the implementation and efficacy of these procedures, especially with regard to the population distribution of polygons in the space of polygonal knots, the distribution of edge vectors, the local curvature, and the local torsion. In addition, we give a rigorous proof that the Crankshaft Rotation Method is ergodic.
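Of the four methods named, the Crankshaft Rotation Method is the easiest to sketch: pick two pivot vertices at random and rotate the vertices strictly between them about the axis through the pivots, which preserves every edge length and the polygon's closure. The sketch below (regular 12-gon start, Rodrigues rotation formula) is a generic illustration, not the authors' implementation:

```python
import numpy as np

def crankshaft(poly, rng):
    """One Crankshaft Rotation move on a closed polygon given as an (n, 3)
    array of vertices: rotate the vertices strictly between two random
    pivots by a random angle about the axis through the pivots."""
    n = len(poly)
    i, j = sorted(rng.choice(n, size=2, replace=False))
    if j - i < 2:
        return poly  # no vertices strictly between the pivots
    axis = poly[j] - poly[i]
    axis = axis / np.linalg.norm(axis)
    theta = rng.uniform(0, 2 * np.pi)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)  # Rodrigues
    out = poly.copy()
    out[i + 1:j] = poly[i] + (poly[i + 1:j] - poly[i]) @ R.T
    return out

# start from a regular (hence equilateral) polygon and scramble it
rng = np.random.default_rng(0)
n = 12
angles = 2 * np.pi * np.arange(n) / n
poly = np.stack([np.cos(angles), np.sin(angles), np.zeros(n)], axis=1)
edge = np.linalg.norm(poly[1] - poly[0])
for _ in range(200):
    poly = crankshaft(poly, rng)
edges = np.linalg.norm(np.roll(poly, -1, axis=0) - poly, axis=1)
```

Because both pivots lie on the rotation axis, the two edges incident to each pivot keep their lengths, so repeated moves explore polygon space without ever leaving the equilateral submanifold.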
Component Evolution in General Random Intersection Graphs
NASA Astrophysics Data System (ADS)
Bradonjić, Milan; Hagberg, Aric; Hengartner, Nicolas W.; Percus, Allon G.
Random intersection graphs (RIGs) are an important random structure with algorithmic applications in social networks, epidemic networks, blog readership, and wireless sensor networks. RIGs can be interpreted as a model for large randomly formed non-metric data sets. We analyze the component evolution in general RIGs, giving conditions on the existence and uniqueness of the giant component. Our techniques generalize existing methods for analysis of component evolution: we analyze survival and extinction properties of a dependent, inhomogeneous Galton-Watson branching process on general RIGs. Our analysis relies on bounding the branching processes and inherits the fundamental concepts of the study of component evolution in Erdős-Rényi graphs. The major challenge comes from the underlying structure of RIGs, which involves both a set of nodes and a set of attributes, with different probabilities associated with each attribute.
Randomized Grain Boundary Liquid Crystal Phase
NASA Astrophysics Data System (ADS)
Chen, D.; Wang, H.; Li, M.; Glaser, M.; Maclennan, J.; Clark, N.
2012-02-01
The formation of macroscopic, chiral domains, in the B4 and dark conglomerate phases, for example, is a feature of bent-core liquid crystals resulting from the interplay of chirality, molecular bend and molecular tilt. We report a new, chiral phase observed in a hockey stick-like liquid crystal molecule. This phase appears below a smectic A phase and cools to a crystal phase. TEM images of the free surface of the chiral phase show hundreds of randomly oriented smectic blocks several hundred nanometers in size, similar to those seen in the twist grain boundary (TGB) phase. However, in contrast to the TGB phase, these blocks are randomly oriented. The characteristic defects in this phase are revealed by freeze-fracture TEM images. We will show how these defects mediate the randomized orientation and discuss the intrinsic mechanism driving the formation of this phase. This work is supported by NSF MRSEC Grant DMR0820579 and NSF Grant DMR0606528.
Multiband signal reconstruction for random equivalent sampling
NASA Astrophysics Data System (ADS)
Zhao, Y. J.; Liu, C. J.
2014-10-01
Random equivalent sampling (RES) is a sampling approach that can be applied to capture high speed repetitive signals with a sampling rate that is much lower than the Nyquist rate. However, the uneven random distribution of the time interval between the excitation pulse and the signal degrades the signal reconstruction performance. For sparse multiband signal sampling, the compressed sensing (CS) based signal reconstruction algorithm can tease out the band supports with overwhelming probability and reduce the impact of uneven random distribution in RES. In this paper, the mathematical model of RES behavior is constructed in the frequency domain. Based on the constructed mathematical model, the band supports of the signal can be determined. Experimental results demonstrate that, for a signal with unknown sparse multiband, the proposed CS-based signal reconstruction algorithm is feasible, and the CS reconstruction algorithm outperforms the traditional RES signal reconstruction method.
Efficient broadcast on random geometric graphs
Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias; Sauerwald, Thomas
2009-01-01
A Random Geometric Graph (RGG) is constructed by distributing n nodes uniformly at random in the unit square and connecting two nodes if their Euclidean distance is at most r, for some prescribed r. They analyze the following randomized broadcast algorithm on RGGs. At the beginning, there is only one informed node. Then in each round, each informed node chooses a neighbor uniformly at random and informs it. They prove that this algorithm informs every node in the largest component of an RGG in O(√n/r) rounds with high probability. This holds for any value of r larger than the critical value for the emergence of a giant component. In particular, the result implies that the diameter of the giant component is Θ(√n/r).
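The push protocol analyzed above takes only a few lines to simulate. The graph size n = 200 and radius r = 0.2 below are illustrative parameters, not values from the paper:

```python
import math
import random

def random_geometric_graph(n, r, rng):
    """Drop n nodes uniformly in the unit square; connect pairs within distance r."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def push_broadcast(adj, start, rng):
    """Round-based push broadcast: every informed node informs one neighbour
    chosen uniformly at random per round. Returns the number of rounds until
    the whole connected component of `start` is informed."""
    comp, stack = {start}, [start]      # connected component of start (DFS)
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in comp:
                comp.add(v)
                stack.append(v)
    informed, rounds = {start}, 0
    while informed != comp:
        rounds += 1
        for u in list(informed):
            if adj[u]:
                informed.add(rng.choice(adj[u]))
    return rounds

rng = random.Random(7)
adj = random_geometric_graph(200, 0.2, rng)
rounds = push_broadcast(adj, 0, rng)
```

With r well above the connectivity threshold, the observed round count stays far below n, consistent with the O(√n/r) bound.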
Defect Detection Using Hidden Markov Random Fields
NASA Astrophysics Data System (ADS)
Dogandžić, Aleksandar; Eua-anant, Nawanat; Zhang, Benhong
2005-04-01
We derive an approximate maximum a posteriori (MAP) method for detecting NDE defect signals using hidden Markov random fields (HMRFs). In the proposed HMRF framework, a set of spatially distributed NDE measurements is assumed to form a noisy realization of an underlying random field that has a simple structure with Markovian dependence. Here, the random field describes the defect signals to be estimated or detected. The HMRF models incorporate measurement locations into the statistical analysis, which is important in scenarios where the same defect affects measurements at multiple locations. We also discuss initialization of the proposed HMRF detector and apply it to simulated eddy-current data and experimental ultrasonic C-scan data from an inspection of a cylindrical Ti 6-4 billet.
Multiband signal reconstruction for random equivalent sampling.
Zhao, Y J; Liu, C J
2014-10-01
Random equivalent sampling (RES) is a sampling approach that can be applied to capture high speed repetitive signals with a sampling rate that is much lower than the Nyquist rate. However, the uneven random distribution of the time interval between the excitation pulse and the signal degrades the signal reconstruction performance. For sparse multiband signal sampling, the compressed sensing (CS) based signal reconstruction algorithm can tease out the band supports with overwhelming probability and reduce the impact of uneven random distribution in RES. In this paper, the mathematical model of RES behavior is constructed in the frequency domain. Based on the constructed mathematical model, the band supports of the signal can be determined. Experimental results demonstrate that, for a signal with unknown sparse multiband, the proposed CS-based signal reconstruction algorithm is feasible, and the CS reconstruction algorithm outperforms the traditional RES signal reconstruction method. PMID:25362458
Universal shocks in random matrix theory
NASA Astrophysics Data System (ADS)
Blaizot, Jean-Paul; Nowak, Maciej A.
2010-11-01
We link the appearance of universal kernels in random matrix ensembles to the phenomenon of shock formation in some fluid dynamical equations. Such equations are derived from Dyson’s random walks after a proper rescaling of the time. In the case of the Gaussian unitary ensemble, on which we focus in this paper, we show that the characteristic polynomials and their inverses evolve according to a viscid Burgers equation with an effective “spectral viscosity” ν_s = 1/(2N), where N is the size of the matrices. We relate the edge of the spectrum of eigenvalues to the shock that naturally appears in the Burgers equation for appropriate initial conditions, thereby suggesting a connection between the well-known microscopic universality of random matrix theory and the universal properties of the solution of the Burgers equation in the vicinity of a shock.
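Written out, a viscid Burgers equation with the quoted spectral viscosity takes the form below. The flow field f is built from the characteristic polynomial after the rescaling of time mentioned in the abstract; the precise definition and sign convention are not reproduced there, so this is a sketch of the generic form only:

```latex
\partial_\tau f + f\,\partial_z f = \nu_s\,\partial_z^2 f,
\qquad \nu_s = \frac{1}{2N}.
```

In the N → ∞ limit the viscosity vanishes and the equation develops shocks, which the authors identify with the edges of the eigenvalue spectrum.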
Random number generation from spontaneous Raman scattering
NASA Astrophysics Data System (ADS)
Collins, M. J.; Clark, A. S.; Xiong, C.; Mägi, E.; Steel, M. J.; Eggleton, B. J.
2015-10-01
We investigate the generation of random numbers via the quantum process of spontaneous Raman scattering. Spontaneous Raman photons are produced by illuminating a highly nonlinear chalcogenide glass (As₂S₃) fiber with a CW laser at a power well below the stimulated Raman threshold. Single Raman photons are collected and separated into two discrete wavelength detuning bins of equal scattering probability. The sequence of photon detection clicks is converted into a random bit stream. Postprocessing is applied to remove detector bias, resulting in a final bit rate of ~650 kb/s. The collected random bit sequences pass the NIST statistical test suite for one hundred 1 Mb samples, with the significance level set to α = 0.01. The fiber is stable and robust, and its high nonlinearity (compared with silica) allows for a short fiber length and low pump power, favourable for real-world applications.
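One standard postprocessing step for removing detector bias (not necessarily the one used in this work) is the von Neumann extractor, sketched below on simulated biased detection clicks:

```python
import random

def von_neumann_debias(bits):
    """Von Neumann extractor: read non-overlapping pairs of input bits;
    emit 1 for the pair (1, 0), 0 for (0, 1), and discard (0, 0)/(1, 1).
    The output is unbiased whenever the input bits are independent and
    identically biased, at the cost of throwing away most of the stream."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

# simulate a detector pair with a 70/30 click imbalance
rng = random.Random(42)
biased = [1 if rng.random() < 0.7 else 0 for _ in range(100000)]
debiased = von_neumann_debias(biased)
mean = sum(debiased) / len(debiased)
```

Despite the strong input bias, the surviving bits come out balanced; the price is the reduced output rate, analogous to the drop from raw click rate to the final ~650 kb/s quoted above.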
Random Numbers from a Delay Equation
NASA Astrophysics Data System (ADS)
Self, Julian; Mackey, Michael C.
2016-05-01
Delay differential equations can have "chaotic" solutions that can be used to mimic Brownian motion. Since a Brownian motion is random in its velocity, it is reasonable to think that a random number generator might be constructed from such a model. In this preliminary study, we consider one specific example of this and show that it satisfies criteria commonly employed in the testing of random number generators (from TestU01's very stringent "Big Crush" battery of tests). A technique termed digit discarding, commonly used both in this generator and in physical RNGs based on laser feedback systems, is discussed with regard to the maximal Lyapunov exponent. We also benchmark the generator against a common contemporary method, the multiple recursive generator MRG32k3a. Although our method is about 7 times slower than MRG32k3a, there is in principle no apparent limit on the number of possible values that can be generated by the scheme we present here.
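The digit-discarding idea can be illustrated on any chaotic system. The sketch below uses the logistic map rather than the authors' delay differential equation, and the digit depth is an arbitrary choice, so this shows only the technique, not the paper's generator:

```python
def chaotic_bits(n, digit=8):
    """Generate n bits from a chaotic trajectory by digit discarding: the
    leading digits of each iterate are predictable from the coarse dynamics,
    so we keep only the parity of a deep decimal digit. Uses the logistic
    map x -> 3.99 x (1 - x), NOT the delay equation from the paper."""
    x = 0.3141592653589793
    bits = []
    for _ in range(n):
        x = 3.99 * x * (1.0 - x)
        bits.append(int(x * 10 ** digit) % 2)  # parity of the digit-th decimal
    return bits

bits = chaotic_bits(10000)
frequency = sum(bits) / len(bits)
```

Discarding the leading digits compensates for a small Lyapunov exponent: even if successive iterates are strongly correlated at coarse scales, the deep digits decorrelate after few steps.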
Randomized parallel speedups for list ranking
Vishkin, U.
1987-06-01
The following problem is considered: given a linked list of length n, compute the distance of each element of the linked list from the end of the list. The problem has two standard deterministic algorithms: a linear time serial algorithm, and an O((n log n)/ρ + log n) time parallel algorithm using ρ processors. The authors present a randomized parallel algorithm for the problem. The algorithm is designed for an exclusive-read exclusive-write parallel random access machine (EREW PRAM). It runs almost surely in time O(n/ρ + log n log* n) using ρ processors. Using a recently published parallel prefix sums algorithm, the list-ranking algorithm can be adapted to run on a concurrent-read concurrent-write parallel random access machine (CRCW PRAM) almost surely in time O(n/ρ + log n) using ρ processors.
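The randomized algorithm itself is beyond a short sketch, but the classic parallel primitive for this problem, pointer jumping, is compact. Below is a sequential simulation of the synchronous rounds (each round reads only the previous round's arrays, as an EREW PRAM would); it finishes in O(log n) rounds:

```python
def list_rank(nxt):
    """Rank each element of a linked list (distance to the end) by pointer
    jumping. nxt[i] is the successor of element i, or None at the tail.
    Each round, every element adds its successor's rank to its own and then
    jumps its pointer past that successor, doubling the distance covered."""
    n = len(nxt)
    rank = [0 if nxt[i] is None else 1 for i in range(n)]
    nxt = list(nxt)
    while any(p is not None for p in nxt):
        # synchronous update: both new arrays read the old rank/nxt
        rank = [rank[i] + (rank[nxt[i]] if nxt[i] is not None else 0)
                for i in range(n)]
        nxt = [nxt[nxt[i]] if nxt[i] is not None else None
               for i in range(n)]
    return rank

# a 10-element chain 0 -> 1 -> ... -> 9
nxt = [i + 1 for i in range(9)] + [None]
ranks = list_rank(nxt)
```

On a real PRAM each comprehension is one parallel step over the n elements, which is where the n/ρ work term in the bounds above comes from.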
Random lasing with spatially nonuniform gain
NASA Astrophysics Data System (ADS)
Fan, Ting; Lü, Jiantao
2016-07-01
The spatial and spectral properties of random lasing with spatially nonuniform gain were investigated in a two-dimensional (2D) disordered medium. The pumping light was described by an individual electric field and coupled into the rate equations through the polarization equation. The spatially nonuniform gain arises from multiple scattering of this pumping light. Numerical simulations of the random system with uniform and nonuniform gain were performed in both the weakly and strongly scattering regimes. In the weakly scattering sample, all lasing modes correspond to those of the passive system whether or not nonuniform gain is considered. In the strongly scattering regime, however, new lasing modes appear with nonuniform gain as the localization area changes. Our results show that random lasing behavior is described more accurately when the nonuniform gain originating from multiple scattering of the pump light is introduced.
ERIC Educational Resources Information Center
Pinsoneault, Terry B.
2005-01-01
The ability of the Minnesota Multiphasic Personality Inventory-Adolescent (MMPI-A; J. N. Butcher et al., 1992) validity scales to detect random, partially random, and nonrandom MMPI-A protocols was investigated. Investigations included the Variable Response Inconsistency scale (VRIN), F, several potentially useful new F and VRIN subscales, and…
ERIC Educational Resources Information Center
Pinsoneault, Terry B.
2007-01-01
The ability of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; J. N. Butcher et al., 2001) validity scales to detect random, partially random, and nonrandom MMPI-2 protocols was investigated. Investigations included the Variable Response Inconsistency scale (VRIN), F, several potentially useful new F and VRIN subscales, and F-sub(b) - F…
Formulation and Application of the Hierarchical Generalized Random-Situation Random-Weight MIRID
ERIC Educational Resources Information Center
Hung, Lai-Fa
2011-01-01
The process-component approach has become quite popular for examining many psychological concepts. A typical example is the model with internal restrictions on item difficulty (MIRID) described by Butter (1994) and Butter, De Boeck, and Verhelst (1998). This study proposes a hierarchical generalized random-situation random-weight MIRID. The…
A Digitally Addressable Random-Access Image Selector and Random-Access Audio System.
ERIC Educational Resources Information Center
Bitzer, Donald L.; And Others
The requirements of PLATO IV, a computer-based education system at the University of Illinois, have led to the development of an improved, digitally addressable, random-access image selector and a digitally addressable, random-access audio device. Both devices utilize pneumatically controlled mechanical binary adders to position the mechanical…
Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable
ERIC Educational Resources Information Center
du Toit, Stephen H. C.; Cudeck, Robert
2009-01-01
A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…
ERIC Educational Resources Information Center
Bradshaw, Ceri A.; Reed, Phil
2012-01-01
In three experiments, human participants pressed the space bar on a computer keyboard to earn points on random-ratio (RR) and random-interval (RI) schedules of reinforcement. Verbalized contingency awareness (CA) for each schedule was measured after the entire task (Experiments 1 and 2), or after each RR-RI trial (Experiment 3). In all three…
A Random Variable Related to the Inversion Vector of a Partial Random Permutation
ERIC Educational Resources Information Center
Laghate, Kavita; Deshpande, M. N.
2005-01-01
In this article, we define the inversion vector of a permutation of the integers 1, 2,..., n. We set up a particular kind of permutation, called a partial random permutation. The sum of the elements of the inversion vector of such a permutation is a random variable of interest.
Randomness in post-selected events
NASA Astrophysics Data System (ADS)
Phuc Thinh, Le; de la Torre, Gonzalo; Bancal, Jean-Daniel; Pironio, Stefano; Scarani, Valerio
2016-03-01
Bell inequality violations can be used to certify private randomness for use in cryptographic applications. In photonic Bell experiments, a large amount of the data that is generated comes from no-detection events and presumably contains little randomness. This raises the question of whether randomness can be extracted only from the smaller post-selected subset corresponding to proper detection events, instead of from the entire set of data. This could in principle be feasible without opening an analogue of the detection loophole as long as the min-entropy of the post-selected data is evaluated by taking all the information into account, including no-detection events. The possibility of extracting randomness from a short string has a practical advantage, because it reduces the computational time of the extraction. Here, we investigate the above idea in a simple scenario, where the devices and the adversary behave according to i.i.d. strategies. We show that indeed almost all the randomness is present in the pair of outcomes for which at least one detection happened. We further show that in some cases applying a pre-processing on the data can capture features that an analysis based on global frequencies only misses, thus resulting in the certification of more randomness. We then briefly consider non-i.i.d. strategies and provide an explicit example of such a strategy that is more powerful than any i.i.d. one even in the asymptotic limit of infinitely many measurement rounds, something that was not reported before in the context of Bell inequalities.
Light scattering by randomly oriented spheroidal particles
NASA Technical Reports Server (NTRS)
Asano, S.; Sato, M.
1980-01-01
A study of the light scattering properties of randomly oriented, identical spheroidal particles is presented. A computation method was developed to integrate the Asano and Yamomoto solution (1975) for scattering from a homogeneous spheroid over all particle orientations; the extinction and scattering cross-sections, the asymmetry factor, and scattering matrix elements are calculated for randomly oriented prolate and oblate spheroids and compared with the calculations for spheres and laboratory measurements. The angular scattering behavior of spheroids is found to be different from that of the spheres for side scattering to backscattering directions, and prolate and oblate spheroids of the same shape parameter have similar angular scattering patterns.
Evolutionary Phase Transitions in Random Environments.
Skanata, Antun; Kussell, Edo
2016-07-15
We present analytical results for long-term growth rates of structured populations in randomly fluctuating environments, which we apply to predict how cellular response networks evolve. We show that networks which respond rapidly to a stimulus will evolve phenotypic memory exclusively under random (i.e., nonperiodic) environments. We identify the evolutionary phase diagram for simple response networks, which we show can exhibit both continuous and discontinuous transitions. Our approach enables exact analysis of diverse evolutionary systems, from viral epidemics to emergence of drug resistance. PMID:27472146
Distribution of random Cantor sets on tubes
NASA Astrophysics Data System (ADS)
Chen, Changhao
2016-04-01
We show that there exist (d-1)-Ahlfors regular compact sets E ⊂ R^d, d ≥ 2, such that for any t < d-1 we have sup_T H^{d-1}(E ∩ T)/w(T)^t < ∞, where the supremum is over all tubes T with width w(T) > 0. This settles a question of T. Orponen. The sets we construct are random Cantor sets, and the method combines geometric and probabilistic estimates on the intersections of these random Cantor sets with affine subspaces.
Quantum Random Walks with General Particle States
NASA Astrophysics Data System (ADS)
Belton, Alexander C. R.
2014-06-01
A convergence theorem is obtained for quantum random walks with particles in an arbitrary normal state. This unifies and extends previous work on repeated-interactions models, including that of Attal and Pautrat (Ann Henri Poincaré 7:59-104, 2006) and Belton (J Lond Math Soc 81:412-434, 2010; Commun Math Phys 300:317-329, 2010). When the random-walk generator acts by ampliation and either multiplication or conjugation by a unitary operator, it is shown that the quantum stochastic cocycle which arises in the limit is driven by a unitary process.
Attractors and Time Averages for Random Maps
NASA Astrophysics Data System (ADS)
Araujo, Vitor
2006-07-01
Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Henon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.
Random grid fern for visual tracking
NASA Astrophysics Data System (ADS)
Cheng, Fei; Liu, Kai; Zhang, Jin; Li, YunSong
2014-05-01
Visual tracking is one of the significant research directions in computer vision. Although the standard random ferns tracking method performs well thanks to the random spatial arrangement of its binary tests, it ignores the effect of image locality on the ferns' descriptive ability, which prevents it from describing the object more accurately and robustly. This paper proposes a novel spatial arrangement of binary tests that divides the bounding box into grids in order to retain more image detail for visual tracking. Experimental results show that this method effectively improves tracking accuracy.
Evolutionary Phase Transitions in Random Environments
NASA Astrophysics Data System (ADS)
Skanata, Antun; Kussell, Edo
2016-07-01
We present analytical results for long-term growth rates of structured populations in randomly fluctuating environments, which we apply to predict how cellular response networks evolve. We show that networks which respond rapidly to a stimulus will evolve phenotypic memory exclusively under random (i.e., nonperiodic) environments. We identify the evolutionary phase diagram for simple response networks, which we show can exhibit both continuous and discontinuous transitions. Our approach enables exact analysis of diverse evolutionary systems, from viral epidemics to emergence of drug resistance.
Entanglement of phase-random states
NASA Astrophysics Data System (ADS)
Nakata, Yoshifumi; Turner, Peter; Murao, Mio
2014-12-01
In order to study generic properties of time-evolving states by time-independent Hamiltonian dynamics, we introduce phase-random states, an ensemble of pure states with fixed amplitudes and uniformly distributed phases in a fixed basis. We compute the average amount of entanglement of phase-random states analytically, and show that the average can be extremely large when the amplitudes are equal and the basis is separable. We also study implications on Hamiltonian dynamics, in particular the realization of the canonical state in a subsystem.
Bootstrapped models for intrinsic random functions
Campbell, K.
1987-01-01
The use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process, and the fact that this function has to be estimated from the data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the "bootstrap" in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as of their "kriging variance," provide a reasonable picture of the variability introduced by imperfect estimation of the generalized covariance function.
Bootstrapped models for intrinsic random functions
Campbell, K.
1988-08-01
Use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process. The fact that this function has to be estimated from data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the bootstrap in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as their kriging variance, provide a reasonable picture of variability introduced by imperfect estimation of the generalized covariance function.
A random matrix approach to credit risk.
Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas
2014-01-01
We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided. PMID:24853864
Lie cascades and Random Dynamical Systems
NASA Astrophysics Data System (ADS)
Schertzer, D.; Lovejoy, S.; Tchiguirinskaia, I.
2009-04-01
Lie cascades were defined as a broad generalization of scalar cascades (Schertzer and Lovejoy 1995; Tchiguirinskaia and Schertzer 1996) with the help of (infinitesimal) sub-generators that are white-noise vector fields on manifolds, instead of white-noise scalar fields on vector spaces. Lie cascades are thus closely related to stochastic flows on manifolds as defined by Kunita (1990). However, the concept of random dynamical systems (Arnold, 1998) allows a closer and simpler connection between stochastic differential equations and the dynamical systems approach. In this talk, we point out some relationships between Lie cascades and random dynamical systems, and hence with the dynamical systems approach.
Polarization-modulated random fiber laser
NASA Astrophysics Data System (ADS)
Wu, Han; Wang, Zinan; He, Qiheng; Fan, Mengqiu; Li, Yunqi; Sun, Wei; Zhang, Li; Li, Yi; Rao, Yunjiang
2016-05-01
In this letter, we propose and experimentally demonstrate a polarization-modulated random fiber laser (RFL) for the first time. It is found that the output power of the half-opened RFL with polarized pumping is sensitive to the state of polarization (SOP) of the Stokes light in a fiber loop acting as a mirror. By inserting a polarization switch (PSW) in the loop mirror, the random lasing can be switched between on and off states; thus such a polarization-modulated RFL can generate pulsed output with a high extinction ratio.
Coloring random graphs and maximizing local diversity.
Bounkong, S; van Mourik, J; Saad, D
2006-11-01
We study a variation of the graph coloring problem on random graphs of finite average connectivity. Given the number of colors, we aim to maximize the number of different colors at neighboring vertices (i.e., one edge distance) of any vertex. Two efficient algorithms, belief propagation and Walksat, are adapted to carry out this task. We present experimental results based on two types of random graphs for different system sizes and identify the critical value of the connectivity for the algorithms to find a perfect solution. The problem and the suggested algorithms have practical relevance since various applications, such as distributed storage, can be mapped onto this problem. PMID:17280022
Amnestically Induced Persistence in Random Walks
NASA Astrophysics Data System (ADS)
Cressoni, J. C.; da Silva, Marco Antonio Alves; Viswanathan, G. M.
2007-02-01
We study how the Hurst exponent α depends on the fraction f of the total time t remembered by non-Markovian random walkers that recall only the distant past. We find that otherwise nonpersistent random walkers switch to persistent behavior when inflicted with significant memory loss. Such memory losses induce the probability density function of the walker’s position to undergo a transition from Gaussian to non-Gaussian. We interpret these findings of persistence in terms of a breakdown of self-regulation mechanisms and discuss their possible relevance to some of the burdensome behavioral and psychological symptoms of Alzheimer’s disease and other dementias.
Efficient generation of large random networks
NASA Astrophysics Data System (ADS)
Batagelj, Vladimir; Brandes, Ulrik
2005-03-01
Random networks are frequently generated, for example, to investigate the effects of model parameters on network properties or to test the performance of algorithms. Recent interest in the statistics of large-scale networks sparked a growing demand for network generators that can generate large numbers of large networks quickly. We here present simple and efficient algorithms to randomly generate networks according to the most commonly used models. Their running time and space requirement is linear in the size of the network generated, and they are easily implemented.
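One of the linear-time tricks in this family of generators is geometric skipping for the G(n,p) model: rather than flipping a coin for every potential edge, jump straight to the next present edge with a geometrically distributed skip. A minimal sketch in this spirit, assuming 0 < p < 1:

```python
import math
import random

# Linear-time G(n,p) edge generation by geometric skipping: the gap between
# consecutive edges in a fixed pair ordering is geometrically distributed,
# so non-edges are skipped in O(1) each instead of being tested one by one.

def gnp_edges(n, p, seed=None):
    """Return the edge list of a G(n,p) random graph (0 < p < 1 assumed)."""
    rng = random.Random(seed)
    lp = math.log(1.0 - p)
    edges, v, w = [], 1, -1
    while v < n:
        r = rng.random()
        w += 1 + int(math.log(1.0 - r) / lp)  # geometric skip length
        while w >= v and v < n:               # carry over into the next row
            w -= v
            v += 1
        if v < n:
            edges.append((v, w))
    return edges

es = gnp_edges(100, 0.1, seed=1)
print(len(es))  # roughly p * n(n-1)/2 = 495 in expectation
```

Running time is proportional to n plus the number of edges generated, matching the linear-in-output-size bound the abstract describes.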
Statistical Analysis of Random Number Generators
NASA Astrophysics Data System (ADS)
Accardi, Luigi; Gäbler, Markus
2011-01-01
In many applications, for example cryptography and Monte Carlo simulation, there is a need for random numbers. Any procedure, algorithm, or device intended to produce such numbers is called a random number generator (RNG). What makes a good RNG? This paper gives an overview of empirical testing of the statistical properties of the sequences produced by RNGs and of special software packages designed for that purpose. We also present the results of applying a particular test suite, TestU01, to a family of RNGs currently being developed at the Centro Interdipartimentale Vito Volterra (CIVV), Roma, Italy.
Random sequential adsorption of trimers and hexamers.
Cieśla, Michał; Barbasz, Jakub
2013-12-01
Adsorption of trimers and hexamers built of identical spheres was studied numerically using the random sequential adsorption (RSA) algorithm. Particles were adsorbed on a two-dimensional, flat and homogeneous surface. Numerical simulations allowed us to determine the maximal random coverage ratio, RSA kinetics as well as the available surface function (ASF), which is crucial for determining the kinetics of the adsorption process obtained experimentally. Additionally, the density autocorrelation function was measured. All the results were compared with previous results obtained for spheres, dimers and tetramers. PMID:24193213
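The RSA loop itself is simple. A minimal sketch for single disks on a unit square (the paper's particles are trimers and hexamers built of spheres, and this sketch ignores boundary effects, so it only illustrates the algorithm, not the paper's setup):

```python
import random

# Random sequential adsorption (RSA) of disks: propose positions uniformly
# at random and keep each disk only if it overlaps no previously kept disk.
# Kept disks are never moved or removed, which is what distinguishes RSA
# from equilibrium packing.

def rsa_disks(radius=0.02, attempts=20000, seed=0):
    rng = random.Random(seed)
    kept = []
    min_sq = (2.0 * radius) ** 2          # squared center-to-center minimum
    for _ in range(attempts):
        x, y = rng.random(), rng.random()
        if all((x - a) ** 2 + (y - b) ** 2 >= min_sq for a, b in kept):
            kept.append((x, y))
    return kept

disks = rsa_disks()
coverage = len(disks) * 3.141592653589793 * 0.02 ** 2
print(f"{len(disks)} disks, coverage ratio ~ {coverage:.2f}")
```

With enough attempts the coverage approaches the jamming limit (about 0.547 for disks); the ASF mentioned in the abstract is the probability that a fresh attempt is accepted as a function of coverage.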
Temporal correlations and device-independent randomness
NASA Astrophysics Data System (ADS)
Mal, Shiladitya; Banik, Manik; Choudhary, Sujit K.
2016-07-01
Leggett-Garg inequalities (LGI) are constraints on certain combinations of temporal correlations obtained by measuring one and the same system at two different instants of time. The usual derivations of LGI assume macroscopic realism per se and noninvasive measurability. We derive these inequalities under a different set of assumptions, namely the assumptions of predictability and no signaling in time (NSIT). As a novel implication of this derivation, we find application of LGI in randomness certification. It turns out that randomness can be certified from temporal correlations, even without knowing the details of the experimental devices, provided the observed correlations violate LGI but satisfy NSIT.
Sample controllability of impulsive differential systems with random coefficients
NASA Astrophysics Data System (ADS)
Zhang, Shuorui; Sun, Jitao
2016-07-01
In this paper, we investigate the controllability of impulsive differential systems with random coefficients. Impulsive differential systems with random coefficients are a different stochastic model from stochastic differential equations. Sufficient conditions of sample controllability for impulsive differential systems with random coefficients are obtained by using random Sadovskii's fixed-point theorem. Finally, an example is given to illustrate our results.
76 FR 79204 - Random Drug Testing Rate for Covered Crewmembers
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-21
... SECURITY Coast Guard Random Drug Testing Rate for Covered Crewmembers AGENCY: Coast Guard, DHS. ACTION: Notice of minimum random drug testing rate. SUMMARY: The Coast Guard has set the calendar year 2012 minimum random drug testing rate at 50 percent of covered crewmembers. DATES: The minimum random...
76 FR 1448 - Random Drug Testing Rate for Covered Crewmembers
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-10
... SECURITY Coast Guard Random Drug Testing Rate for Covered Crewmembers AGENCY: Coast Guard, DHS. ACTION: Notice of minimum random drug testing rate. SUMMARY: The Coast Guard has set the calendar year 2011 minimum random drug testing rate at 50 percent of covered crewmembers. DATES: The minimum random...
Random Vibrations: Assessment of the State of the Art
Paez, T.L.
1999-02-23
Random vibration is the phenomenon wherein random excitation applied to a mechanical system induces random response. We summarize the state of the art in random vibration analysis and testing, commenting on history, linear and nonlinear analysis, the analysis of large-scale systems, and probabilistic structural testing.
Reducing financial avalanches by random investments.
Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea; Helbing, Dirk
2013-12-01
Building on similarities between earthquakes and extreme financial events, we use a self-organized criticality-generating model to study herding and avalanche dynamics in financial markets. We consider a community of interacting investors, distributed in a small-world network, who bet on the bullish (increasing) or bearish (decreasing) behavior of the market, which has been specified according to the S&P 500 historical time series. Remarkably, we find that the size of herding-related avalanches in the community can be strongly reduced by the presence of a relatively small percentage of traders, randomly distributed inside the network, who adopt a random investment strategy. Our findings suggest a promising strategy for limiting the size of financial bubbles and crashes. We also find that the resulting wealth distribution of all traders follows the well-known Pareto power law, while that of the random traders is exponential. In other words, technical traders face a much greater risk of loss relative to their chance of gain than random traders do. PMID:24483518
Random-phase metasurfaces at optical wavelengths
Pors, Anders; Ding, Fei; Chen, Yiting; Radko, Ilya P.; Bozhevolnyi, Sergey I.
2016-01-01
Random-phase metasurfaces, in which the constituents scatter light with random phases, have the property that an incident plane wave will diffusely scatter, thereby leading to a complex far-field response that is most suitably described by statistical means. In this work, we present and exemplify the statistical description of the far-field response, particularly highlighting how the response for polarised and unpolarised light might be alike or different depending on the correlation of scattering phases for two orthogonal polarisations. By utilizing gap plasmon-based metasurfaces, consisting of an optically thick gold film overlaid by a subwavelength thin glass spacer and an array of gold nanobricks, we design and realize random-phase metasurfaces at a wavelength of 800 nm. Optical characterisation of the fabricated samples convincingly demonstrates the diffuse scattering of reflected light, with statistics obeying the theoretical predictions. We foresee the use of random-phase metasurfaces for camouflage applications and as high-quality reference structures in dark-field microscopy, while the control of the statistics for polarised and unpolarised light might find usage in security applications. Finally, by incorporating a certain correlation between scattering by neighbouring metasurface constituents, new types of functionalities can be realised, such as a Lambertian reflector. PMID:27328635
Effect of randomness in logistic maps
NASA Astrophysics Data System (ADS)
Khaleque, Abdul; Sen, Parongama
2015-01-01
We study a random logistic map x_{t+1} = a_t x_t [1 - x_t], where the a_t are bounded random variables (q_1 ≤ a_t ≤ q_2) independently drawn from a distribution. x_t does not show any regular behavior in time. We find that x_t shows fully ergodic behavior when the maximum allowed value of a_t is 4. However
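A minimal sketch of such a map, assuming (for illustration only) that the a_t are drawn uniformly from [q1, q2]; the paper considers bounded a_t more generally:

```python
import random

# Random logistic map: x_{t+1} = a_t * x_t * (1 - x_t), with a_t drawn
# independently at each step from a bounded distribution (here: uniform).
# Since a_t <= 4 and x in [0, 1], the orbit stays in [0, 1].

def random_logistic(q1, q2, steps=1000, x0=0.5, seed=0):
    rng = random.Random(seed)
    x = x0
    traj = []
    for _ in range(steps):
        a = rng.uniform(q1, q2)   # the quenched-in-time random coefficient
        x = a * x * (1.0 - x)
        traj.append(x)
    return traj

traj = random_logistic(3.5, 4.0)
print(min(traj), max(traj))  # the orbit stays within [0, 1]
```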
UNSTEADY DISPERSION IN RANDOM INTERMITTENT FLOW
The longitudinal dispersion coefficient of a conservative tracer was calculated from flow tests in a dead-end pipe loop system. Flow conditions for these tests ranged from laminar to transitional flow, and from steady to intermittent and random. Two static mixers linked in series...
Detecting targets hidden in random forests
NASA Astrophysics Data System (ADS)
Kouritzin, Michael A.; Luo, Dandan; Newton, Fraser; Wu, Biao
2009-05-01
Military tanks, cargo or troop carriers, missile carriers, and rocket launchers often hide from detection in forests, which complicates the problem of locating them. An electro-optic camera mounted on a surveillance aircraft or unmanned aerial vehicle is used to capture images of forests with possibly hidden targets, e.g., rocket launchers. We consider random forests with longitudinal and latitudinal correlations. Specifically, foliage coverage is encoded with a binary representation (i.e., foliage or no foliage) and is correlated in adjacent regions. We address the detection of camouflaged targets hidden in random forests by building memory into the observations. In particular, we propose an efficient algorithm to generate random forests, ground, and camouflage of hidden targets with two-dimensional correlations. The observations are a sequence of snapshots consisting of foliage-obscured ground or target. Theoretically, detection is possible because there are subtle differences between the correlations of the ground and those of the rocket launcher's camouflage. However, these differences are well beyond human perception. To detect the presence of hidden targets automatically, we develop a Markov representation for these sequences and modify the classical filtering equations to allow the Markov chain observation. Particle filters are used to estimate the position of the targets, in combination with a novel random weighting technique. Furthermore, we give positive proof-of-concept simulations.
Randomized spatial context for object search.
Jiang, Yuning; Meng, Jingjing; Yuan, Junsong; Luo, Jiebo
2015-06-01
Searching visual objects in large image or video data sets is a challenging problem, because it requires efficient matching and accurate localization of query objects that often occupy a small part of an image. Although spatial context has been shown to help produce more reliable detection than methods that match local features individually, how to extract appropriate spatial context remains an open problem. Instead of using fixed-scale spatial context, we propose a randomized approach to deriving spatial context, in the form of spatial random partition. The effect of spatial context is achieved by averaging the matching scores over multiple random patches. Our approach offers three benefits: 1) the aggregation of the matching scores over multiple random patches provides robust local matching; 2) the matched objects can be directly identified on the pixelwise confidence map, which results in efficient object localization; and 3) our algorithm lends itself to easy parallelization and also allows a flexible tradeoff between accuracy and speed through adjusting the number of partition times. Both theoretical studies and experimental comparisons with the state-of-the-art methods validate the advantages of our approach. PMID:25781874
Recruiting Participants for Randomized Controlled Trials
ERIC Educational Resources Information Center
Gallagher, H. Alix; Roschelle, Jeremy; Feng, Mingyu
2014-01-01
The objective of this study was to look across strategies used in a wide range of studies to build a framework for researchers to use in conceptualizing the recruitment process. This paper harvests lessons learned across 19 randomized controlled trials in K-12 school settings conducted by a leading research organization to identify strategies that…
Reporting Randomized Controlled Trials in Education
ERIC Educational Resources Information Center
Mayo-Wilson, Evan; Grant, Sean; Montgomery, Paul
2014-01-01
Randomized controlled trials (RCTs) are increasingly used to evaluate programs and interventions in order to inform education policy and practice. High quality reports of these RCTs are needed for interested readers to understand the rigor of the study, the interventions tested, and the context in which the evaluation took place (Mayo-Wilson et…
A Model for Random Student Drug Testing
ERIC Educational Resources Information Center
Nelson, Judith A.; Rose, Nancy L.; Lutz, Danielle
2011-01-01
The purpose of this case study was to examine random student drug testing in one school district relevant to: (a) the perceptions of students participating in competitive extracurricular activities regarding drug use and abuse; (b) the attitudes and perceptions of parents, school staff, and community members regarding student drug involvement; (c)…
HESS Opinions "A random walk on water"
NASA Astrophysics Data System (ADS)
Koutsoyiannis, D.
2009-10-01
According to the traditional notion of randomness and uncertainty, natural phenomena are separated into two mutually exclusive components, random (or stochastic) and deterministic. Within this dichotomous logic, the deterministic part supposedly represents cause-effect relationships and, thus, is physics and science (the "good"), whereas randomness has little relationship with science and no relationship with understanding (the "evil"). We argue that such views should be reconsidered by admitting that uncertainty is an intrinsic property of nature, that causality implies dependence of natural processes in time, thus suggesting predictability, but even the tiniest uncertainty (e.g., in initial conditions) may result in unpredictability after a certain time horizon. On these premises it is possible to shape a consistent stochastic representation of natural processes, in which predictability (suggested by deterministic laws) and unpredictability (randomness) coexist and are not separable or additive components. Deciding which of the two dominates is simply a matter of specifying the time horizon of the prediction. Long horizons of prediction are inevitably associated with high uncertainty, whose quantification relies on understanding the long-term stochastic properties of the processes.
HESS Opinions "A random walk on water"
NASA Astrophysics Data System (ADS)
Koutsoyiannis, D.
2010-03-01
According to the traditional notion of randomness and uncertainty, natural phenomena are separated into two mutually exclusive components, random (or stochastic) and deterministic. Within this dichotomous logic, the deterministic part supposedly represents cause-effect relationships and, thus, is physics and science (the "good"), whereas randomness has little relationship with science and no relationship with understanding (the "evil"). Here I argue that such views should be reconsidered by admitting that uncertainty is an intrinsic property of nature, that causality implies dependence of natural processes in time, thus suggesting predictability, but even the tiniest uncertainty (e.g. in initial conditions) may result in unpredictability after a certain time horizon. On these premises it is possible to shape a consistent stochastic representation of natural processes, in which predictability (suggested by deterministic laws) and unpredictability (randomness) coexist and are not separable or additive components. Deciding which of the two dominates is simply a matter of specifying the time horizon and scale of the prediction. Long horizons of prediction are inevitably associated with high uncertainty, whose quantification relies on the long-term stochastic properties of the processes.
Colloidal diffusion over a random landscape
NASA Astrophysics Data System (ADS)
Su, Yun; Ma, Xiao-Guang; Lai, Pik-Yin; Tong, Penger
2015-03-01
A two-dimensional quenched random energy landscape is generated by using a randomly packed layer of colloidal spheres of two different sizes fixed on a glass substrate. A number of monodisperse particles diffuse on top of the first-layer particles. The diffusing particles in water feel the gravitational energy landscape U(x,y) generated by the modulated surface of the first-layer particles. The trajectories of the particles are obtained by optical microscopy and particle tracking. The energy landscape U(x,y) is obtained from the measured population histogram P(x,y) of the diffusing particles via the Boltzmann distribution, P(x,y) = exp[-U(x,y)/k_BT], where k_BT is the thermal energy of the particles. The distribution of the energy barrier heights is obtained from the measured U(x,y). From the particles' trajectories, we obtain the dynamical properties of the diffusing particles over the random energy landscape, such as the mean square displacement and the distribution of the escape time across the energy barriers. A quantitative relationship between the long-time diffusion coefficient and the random energy landscape is found experimentally, which is in good agreement with the theoretical prediction. *Work supported in part by the Research Grants Council of Hong Kong SAR.
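The Boltzmann inversion described above, recovering U(x,y) from an occupation histogram, can be sketched in a few lines. The histogram here is a synthetic stand-in, not the paper's data, and the bin regularization is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the measured population histogram P(x, y);
# in the experiment this would come from particle-tracking trajectories.
counts = rng.poisson(lam=50, size=(32, 32)).astype(float)
counts[counts == 0] = 0.5          # regularize empty bins before taking the log

P = counts / counts.sum()          # occupation probability P(x, y)
U = -np.log(P / P.max())           # energy landscape in units of k_B*T,
                                   # shifted so the deepest well sits at U = 0

print(float(U.min()), float(U.max()))
```

Frequently visited bins map to low energies and rarely visited bins to high barriers; the shift by P.max() only fixes the arbitrary zero of energy.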
Object Recognition and Random Image Structure Evolution
ERIC Educational Resources Information Center
Sadr, Javid; Sinha, Pawan
2004-01-01
We present a technique called Random Image Structure Evolution (RISE) for use in experimental investigations of high-level visual perception. Potential applications of RISE include the quantitative measurement of perceptual hysteresis and priming, the study of the neural substrates of object perception, and the assessment and detection of subtle…
Reducing financial avalanches by random investments
NASA Astrophysics Data System (ADS)
Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea; Helbing, Dirk
2013-12-01
Building on similarities between earthquakes and extreme financial events, we use a self-organized criticality-generating model to study herding and avalanche dynamics in financial markets. We consider a community of interacting investors, distributed in a small-world network, who bet on the bullish (increasing) or bearish (decreasing) behavior of the market which has been specified according to the S&P 500 historical time series. Remarkably, we find that the size of herding-related avalanches in the community can be strongly reduced by the presence of a relatively small percentage of traders, randomly distributed inside the network, who adopt a random investment strategy. Our findings suggest a promising strategy to limit the size of financial bubbles and crashes. We also obtain that the resulting wealth distribution of all traders corresponds to the well-known Pareto power law, while that of random traders is exponential. In other words, technical traders face a much greater risk of losses, relative to their probability of gains, than random traders do.
A Sweet Tasting Demonstration of Random Occurrences.
ERIC Educational Resources Information Center
Christopher, Andrew N.; Marek, Pam
2002-01-01
Discusses a game in which students must guess the flavor of LifeSaver candy without the aid of sight and smell. Explains that this demonstration assists students to understand the phenomenon of random occurrences. Describes how the presentation is conducted as well as the outcomes of the demonstration. (CMK)
A Random Walk on a Circular Path
ERIC Educational Resources Information Center
Ching, W.-K.; Lee, M. S.
2005-01-01
This short note introduces an interesting random walk on a circular path with cards of numbers. By using high school probability theory, it is proved that under some assumptions on the number of cards, the probability that a walker will return to a fixed position will tend to one as the length of the circular path tends to infinity.
Random Walk Method for Potential Problems
NASA Technical Reports Server (NTRS)
Krishnamurthy, T.; Raju, I. S.
2002-01-01
A local Random Walk Method (RWM) for potential problems governed by Laplace's and Poisson's equations is developed for two- and three-dimensional problems. The RWM is implemented and demonstrated in a multiprocessor parallel environment on a Beowulf cluster of computers. A speed gain of 16 is achieved as the number of processors is increased from 1 to 23.
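The idea behind such a random-walk solver can be illustrated for Laplace's equation on a grid: the value at an interior node equals the expected boundary value reached by a simple random walk started there. A minimal serial sketch (not the paper's parallel implementation; the grid size and boundary condition u = x are made up for illustration):

```python
import random

def laplace_rw(ix, iy, n, walks=4000, seed=1):
    """Estimate the solution of Laplace's equation at interior node
    (ix, iy) of an (n+1) x (n+1) grid with boundary condition u = x/n,
    by averaging the boundary values reached by simple random walks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        x, y = ix, iy
        while 0 < x < n and 0 < y < n:          # walk until a boundary node
            dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            x, y = x + dx, y + dy
        total += x / n                           # boundary value at exit node
    return total / walks

# u(x, y) = x is harmonic, so the estimate at the centre should be near 0.5
print(laplace_rw(8, 8, 16))
```

Because each walk is independent, the method parallelizes trivially, which is what makes a Beowulf-cluster implementation attractive.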
Digital servo control of random sound fields
NASA Technical Reports Server (NTRS)
Nakich, R. B.
1973-01-01
It is necessary to place a number of sensors at different positions in the sound field to determine the actual sound intensities to which the test object is subjected. It is then possible to determine whether the specification is being met adequately or exceeded. Since the excitation is random in nature, the signals are essentially coherent and it is impossible to obtain a true average.
Average Transmission Probability of a Random Stack
ERIC Educational Resources Information Center
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
Top-hat random fiber Bragg grating.
Yin, Hongwei; Gbadebo, Adenowo; Turitsyna, Elena G
2015-08-01
We examined the possibility of using noise or pseudo-random variations of the refractive index in the design of fiber Bragg gratings (FBGs). We demonstrated theoretically and experimentally that top-hat FBGs may be designed and fabricated using this approach. The reflectivity of the fabricated top-hat FBG matches quite well with that of the designed one. PMID:26258365
Species selection and random drift in macroevolution.
Chevin, Luis-Miguel
2016-03-01
Species selection resulting from trait-dependent speciation and extinction is increasingly recognized as an important mechanism of phenotypic macroevolution. However, the recent bloom in statistical methods quantifying this process faces a scarcity of dynamical theory for their interpretation, notably regarding the relative contributions of deterministic versus stochastic evolutionary forces. I use simple diffusion approximations of birth-death processes to investigate how the expected and random components of macroevolutionary change depend on phenotype-dependent speciation and extinction rates, as can be estimated empirically. I show that the species selection coefficient for a binary trait, and selection differential for a quantitative trait, depend not only on differences in net diversification rates (speciation minus extinction), but also on differences in species turnover rates (speciation plus extinction), especially in small clades. The randomness in speciation and extinction events also produces a species-level equivalent to random genetic drift, which is stronger for higher turnover rates. I then show how microevolutionary processes including mutation, organismic selection, and random genetic drift cause state transitions at the species level, allowing comparison of evolutionary forces across levels. A key parameter that would be needed to apply this theory is the distribution and rate of origination of new optimum phenotypes along a phylogeny. PMID:26880617
ERIC Educational Resources Information Center
Huang, Hung-Yu; Wang, Wen-Chung
2014-01-01
The DINA (deterministic input, noisy, and gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The outcomes known as slip and guess are included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…
Random-phase metasurfaces at optical wavelengths.
Pors, Anders; Ding, Fei; Chen, Yiting; Radko, Ilya P; Bozhevolnyi, Sergey I
2016-01-01
Random-phase metasurfaces, in which the constituents scatter light with random phases, have the property that an incident plane wave will diffusely scatter, hereby leading to a complex far-field response that is most suitably described by statistical means. In this work, we present and exemplify the statistical description of the far-field response, particularly highlighting how the response for polarised and unpolarised light might be alike or different depending on the correlation of scattering phases for two orthogonal polarisations. By utilizing gap plasmon-based metasurfaces, consisting of an optically thick gold film overlaid by a subwavelength thin glass spacer and an array of gold nanobricks, we design and realize random-phase metasurfaces at a wavelength of 800 nm. Optical characterisation of the fabricated samples convincingly demonstrates the diffuse scattering of reflected light, with statistics obeying the theoretical predictions. We foresee the use of random-phase metasurfaces for camouflage applications and as high-quality reference structures in dark-field microscopy, while the control of the statistics for polarised and unpolarised light might find usage in security applications. Finally, by incorporating a certain correlation between scattering by neighbouring metasurface constituents new types of functionalities can be realised, such as a Lambertian reflector. PMID:27328635
Random-phase metasurfaces at optical wavelengths
NASA Astrophysics Data System (ADS)
Pors, Anders; Ding, Fei; Chen, Yiting; Radko, Ilya P.; Bozhevolnyi, Sergey I.
2016-06-01
Random-phase metasurfaces, in which the constituents scatter light with random phases, have the property that an incident plane wave will diffusely scatter, hereby leading to a complex far-field response that is most suitably described by statistical means. In this work, we present and exemplify the statistical description of the far-field response, particularly highlighting how the response for polarised and unpolarised light might be alike or different depending on the correlation of scattering phases for two orthogonal polarisations. By utilizing gap plasmon-based metasurfaces, consisting of an optically thick gold film overlaid by a subwavelength thin glass spacer and an array of gold nanobricks, we design and realize random-phase metasurfaces at a wavelength of 800 nm. Optical characterisation of the fabricated samples convincingly demonstrates the diffuse scattering of reflected light, with statistics obeying the theoretical predictions. We foresee the use of random-phase metasurfaces for camouflage applications and as high-quality reference structures in dark-field microscopy, while the control of the statistics for polarised and unpolarised light might find usage in security applications. Finally, by incorporating a certain correlation between scattering by neighbouring metasurface constituents new types of functionalities can be realised, such as a Lambertian reflector.
Random template banks and relaxed lattice coverings
Messenger, C.; Prix, R.; Papa, M. A.
2009-05-15
Template-based searches for gravitational waves are often limited by the computational cost associated with searching large parameter spaces. The study of efficient template banks, in the sense of using the smallest number of templates, is therefore of great practical interest. The traditional approach to template-bank construction requires every point in parameter space to be covered by at least one template, which rapidly becomes inefficient at higher dimensions. Here we study an alternative approach, where any point in parameter space is covered only with a given probability η < 1. We find that by giving up complete coverage in this way, large reductions in the number of templates are possible, especially at higher dimensions. The prime examples studied here are random template banks, in which templates are placed randomly with uniform probability over the parameter space. In addition to its obvious simplicity, this method turns out to be surprisingly efficient. We analyze the statistical properties of such random template banks, and compare their efficiency to traditional lattice coverings. We further study relaxed lattice coverings (using Z_n and A_n* lattices), which similarly cover any signal location only with probability η. The relaxed A_n* lattice is found to yield the most efficient template banks at low dimensions (n ≲ 10), while random template banks increasingly outperform any other method at higher dimensions.
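The coverage trade-off can be illustrated with a toy Monte Carlo: scatter N templates uniformly over a 2-D parameter space, let each cover a ball of some match radius, and measure the fraction η of random signal locations that are covered. The dimensions, radius, and counts below are arbitrary choices for the sketch, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n_templates, radius = 300, 0.05

templates = rng.random((n_templates, 2))   # uniform random template bank
signals = rng.random((5000, 2))            # random signal locations

# A signal is covered if at least one template lies within the match radius.
d2 = ((signals[:, None, :] - templates[None, :, :]) ** 2).sum(axis=-1)
eta = float((d2.min(axis=1) <= radius**2).mean())

# For an ideal (edge-free) random bank: eta = 1 - (1 - V_ball)^N
expected = 1 - (1 - np.pi * radius**2) ** n_templates
print(eta, expected)
```

The measured η falls slightly below the analytic value because of boundary effects, but the key point survives: coverage approaches 1 only asymptotically, so accepting η < 1 buys a large reduction in template count.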
Index statistical properties of sparse random graphs
NASA Astrophysics Data System (ADS)
Metz, F. L.; Stariolo, Daniel A.
2015-10-01
Using the replica method, we develop an analytical approach to compute the characteristic function for the probability P_N(K, λ) that a large N × N adjacency matrix of sparse random graphs has K eigenvalues below a threshold λ. The method allows one to determine, in principle, all moments of P_N(K, λ), from which the typical sample-to-sample fluctuations can be fully characterized. For random graph models with localized eigenvectors, we show that the index variance scales linearly with N ≫ 1 for |λ| > 0, with a model-dependent prefactor that can be exactly calculated. Explicit results are discussed for Erdős-Rényi and regular random graphs, both exhibiting a prefactor with a nonmonotonic behavior as a function of λ. These results contrast with rotationally invariant random matrices, where the index variance scales only as ln N, with a universal prefactor that is independent of λ. Numerical diagonalization results confirm the exactness of our approach and, in addition, strongly support the Gaussian nature of the index fluctuations.
Sampled-Data Consensus Over Random Networks
NASA Astrophysics Data System (ADS)
Wu, Junfeng; Meng, Ziyang; Yang, Tao; Shi, Guodong; Johansson, Karl Henrik
2016-09-01
This paper considers the consensus problem for a network of nodes with random interactions and sampled-data control actions. We first show that consensus in expectation, in mean square, and almost surely are equivalent for a general random network model when the inter-sampling interval and network size satisfy a simple relation. The three types of consensus are shown to be simultaneously achieved over an independent or a Markovian random network defined on an underlying graph with a directed spanning tree. For both independent and Markovian random network models, necessary and sufficient conditions for mean-square consensus are derived in terms of the spectral radius of the corresponding state transition matrix. These conditions are then interpreted as the existence of a critical value of the inter-sampling interval, below which global mean-square consensus is achieved and above which the system diverges in the mean-square sense for some initial states. Finally, we establish an upper bound on the inter-sampling interval below which almost sure consensus is reached, and a lower bound on the inter-sampling interval above which almost sure divergence is reached. Some numerical simulations are given to validate the theoretical results, and some discussions on the critical value of the inter-sampling interval for mean-square consensus are provided.
Wave propagation on a random lattice
Sahlmann, Hanno
2010-09-15
Motivated by phenomenological questions in quantum gravity, we consider the propagation of a scalar field on a random lattice. We describe a procedure to calculate the dispersion relation for the field by taking a limit of a periodic lattice. We use this to calculate the lowest order coefficients of the dispersion relation for a specific one-dimensional model.
Order and Frustration in Random Media.
NASA Astrophysics Data System (ADS)
Rubinstein, Michael
Models of two-dimensional disordered materials, designed to illuminate the behavior of frustrated liquids and glasses, are discussed. The existence of quenched analogues of the equilibrium hexatic phase in randomly packed planar arrays of hard spheres of two different sizes and in binary arrays of hard discs constructed on computer from a seed cluster by several deterministic algorithms is suggested. These phases are characterized by short-range translational order, but very long-range correlations in the orientations of near-neighbour "bonds". This phenomenon is attributed to the tendency of quenched impurities to trap dislocations. Dense, randomly packed assemblies of disks on a two-dimensional manifold of constant negative curvature allow one to study packing problems very similar to those found in three-dimensional flat space. Results for radial distribution functions, bond angle correlations, packing fractions, and average coordination numbers for frustrated computer-generated arrays on two-dimensional manifolds for a variety of curvatures are presented. A renormalization group treatment of quenched disorder in 2d flat space is developed, revealing an unusual reentrant phase transition. The model studied is motivated by the interaction of quenched impurities in a solid with dislocations in thermodynamic equilibrium. The system is formally identical to a two-dimensional XY magnet with random Dzyaloshinskii-Moriya interactions. It is mapped onto a Coulomb gas with quenched random dipoles. For small amounts of randomness, the behavior with decreasing temperatures is first paramagnetic, then quasi-long-range ferromagnetic, and finally becomes paramagnetic again via a second, reentrant phase transition. For large amounts of randomness, the ferromagnetic phase is destroyed entirely. The phase transitions are driven by an unbinding of vortices, just as in pure XY models. In contrast to pure XY models, the exponent η and the spin-wave stiffness are nonuniversal at T
Random sphere packing model of heterogeneous propellants
NASA Astrophysics Data System (ADS)
Kochevets, Sergei Victorovich
It is well recognized that combustion of heterogeneous propellants is strongly dependent on the propellant morphology. Recent developments in computing systems make it possible to start three-dimensional modeling of heterogeneous propellant combustion. A key component of such large scale computations is a realistic model of industrial propellants which retains the true morphology---a goal never achieved before. The research presented develops the Random Sphere Packing Model of heterogeneous propellants and generates numerical samples of actual industrial propellants. This is done by developing a sphere packing algorithm which randomly packs a large number of spheres with a polydisperse size distribution within a rectangular domain. First, the packing code is developed, optimized for performance, and parallelized using the OpenMP shared memory architecture. Second, the morphology and packing fraction of two simple cases of unimodal and bimodal packs are investigated computationally and analytically. It is shown that both the Loose Random Packing and Dense Random Packing limits are not well defined and the growth rate of the spheres is identified as the key parameter controlling the efficiency of the packing. For a properly chosen growth rate, computational results are found to be in excellent agreement with experimental data. Third, two strategies are developed to define numerical samples of polydisperse heterogeneous propellants: the Deterministic Strategy and the Random Selection Strategy. Using these strategies, numerical samples of industrial propellants are generated. The packing fraction is investigated and it is shown that the experimental values of the packing fraction can be achieved computationally. It is strongly believed that this Random Sphere Packing Model of propellants is a major step forward in the realistic computational modeling of heterogeneous propellant combustion. In addition, a method of analysis of the morphology of heterogeneous
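A much simpler cousin of the packing algorithm described above, random sequential addition of equal disks in a unit square, conveys the flavor of random packing. This sketch is not the thesis's method (which grows polydisperse spheres in parallel); the radius and attempt count are arbitrary:

```python
import math
import random

def rsa_disks(radius=0.05, attempts=20000, seed=3):
    """Random sequential addition: propose uniformly random centres and
    accept only those that do not overlap any previously placed disk."""
    rng = random.Random(seed)
    centers = []
    for _ in range(attempts):
        x = rng.uniform(radius, 1 - radius)
        y = rng.uniform(radius, 1 - radius)
        if all((x - cx) ** 2 + (y - cy) ** 2 >= (2 * radius) ** 2
               for cx, cy in centers):
            centers.append((x, y))
    packing_fraction = len(centers) * math.pi * radius ** 2
    return len(centers), packing_fraction

n, phi = rsa_disks()
print(n, phi)  # phi stays below the 2-D RSA jamming limit (~0.547)
```

Acceptance slows dramatically as the pack fills, which is why practical codes such as the one described here need smarter strategies (growth rates, parallelism) rather than blind insertion.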
Can observed randomness be certified to be fully intrinsic?
Dhara, Chirag; de la Torre, Gonzalo; Acín, Antonio
2014-03-14
In general, any observed random process includes two qualitatively different forms of randomness: apparent randomness, which results both from ignorance or lack of control of degrees of freedom in the system, and intrinsic randomness, which is not ascribable to any such cause. While classical systems only possess the first kind of randomness, quantum systems may exhibit some intrinsic randomness. In this Letter, we provide quantum processes in which all the observed randomness is fully intrinsic. These results are derived under minimal assumptions: the validity of the no-signaling principle and an arbitrary (but not absolute) lack of freedom of choice. Our results prove that quantum predictions cannot be completed already in simple finite scenarios, for instance with three parties performing two dichotomic measurements. Moreover, the observed randomness tends to a perfect random bit when increasing the number of parties, thus defining an explicit process attaining full randomness amplification. PMID:24679271
NASA Astrophysics Data System (ADS)
Moyer, Steve; Uhl, Elizabeth R.
2015-05-01
For more than 50 years, the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has been studying and modeling the human visual discrimination process as it pertains to military imaging systems. In order to develop sensor performance models, human observers are trained to expert levels in the identification of military vehicles. From 1998 until 2006, the experimental stimuli were block randomized, meaning that stimuli with similar difficulty levels (for example, in terms of distance from target, blur, noise, etc.) were presented together in blocks of approximately 24 images but the order of images within the block was random. Starting in 2006, complete randomization came into vogue, meaning that difficulty could change image to image. It was thought that this would provide a more statistically robust result. In this study we investigated the impact of the two types of randomization on performance in two groups of observers matched for skill to create equivalent groups. It is hypothesized that Soldiers in the Complete Randomized condition will have to shift their decision criterion more frequently than Soldiers in the Block Randomization group and this shifting is expected to impede performance so that Soldiers in the Block Randomized group perform better.
Generation of kth-order random toposequences
NASA Astrophysics Data System (ADS)
Odgers, Nathan P.; McBratney, Alex. B.; Minasny, Budiman
2008-05-01
The model presented in this paper derives toposequences from a digital elevation model (DEM). It is written in ArcInfo Macro Language (AML). The toposequences are called kth-order random toposequences, because they take a random path uphill to the top of a hill and downhill to a stream or valley bottom from a randomly selected seed point, and they are located in a streamshed of order k according to a particular stream-ordering system. We define a kth-order streamshed as the area of land that drains directly to a stream segment of stream order k. The model attempts to optimise the spatial configuration of a set of derived toposequences iteratively by using simulated annealing to maximise the total sum of distances between each toposequence hilltop in the set. The user is able to select the order, k, of the derived toposequences. Toposequences are useful for determining soil sampling locations for use in collecting soil data for digital soil mapping applications. Sampling locations can be allocated according to equal elevation or equal-distance intervals along the length of the toposequence, for example. We demonstrate the use of this model for a study area in the Hunter Valley of New South Wales, Australia. Of the 64 toposequences derived, 32 were first-order random toposequences according to Strahler's stream-ordering system, and 32 were second-order random toposequences. The model that we present in this paper is an efficient method for sampling soil along soil toposequences. The soils along a toposequence are related to each other by the topography they are found in, so soil data collected by this method is useful for establishing soil-landscape rules for the preparation of digital soil maps.
Interfaces in Random Field Ising Systems
NASA Astrophysics Data System (ADS)
Seppälä, Eira
2001-03-01
Domain walls are studied in random field Ising magnets at T=0 in two and three dimensions using exact ground state calculations. In 2D below the random-field-strength-dependent length scale L_b the walls exhibit a super-rough behavior with a roughness exponent greater than unity, ζ ~= 1.20 ± 0.05. The nearest-neighbor height difference probability distribution depends on the system size below L_b. Above L_b domains become fractal, ζ ~= 1 (E. T. Seppälä, V. Petäjä, and M. J. Alava, Phys. Rev. E 58, R5217 (1998)). The energy fluctuation exponent has a value θ = 1 below L_b, contradicting the exponent relation θ = 2ζ − 1 due to the broken scale invariance, and vanishes for system sizes above L_b. The broken scale invariance should be manifest also in the Kardar-Parisi-Zhang problem with random-field noise (E. Frey, U. C. Täuber, and H. K. Janssen, Europhys. Lett. 47, 14 (1999)). In 3D there exists a transition between ferromagnetic and paramagnetic phases at the critical random field strength (Δ/J)_c. Below (Δ/J)_c the roughness exponent is also greater, ζ ~= 0.73 ± 0.03, than the functional-renormalization-group calculation result ζ = (5-d)/3 (D. Fisher, Phys. Rev. Lett. 56, 1964 (1986); P. Chauve, P. Le Doussal, and K. Wiese, cond-mat/0006056). The height differences are system size dependent in 3D as well. The behavior of the domain walls in 2D below L_b with a constant external field, i.e., the random-bulk wetting, is demonstrated (E. T. Seppälä, I. Sillanpää, and M. J. Alava, unpublished).
Randomization and the Gross-Pitaevskii Hierarchy
NASA Astrophysics Data System (ADS)
Sohinger, Vedran; Staffilani, Gigliola
2015-10-01
We study the Gross-Pitaevskii hierarchy on the spatial domain . By using an appropriate randomization of the Fourier coefficients in the collision operator, we prove an averaged form of the main estimate which is used in order to contract the Duhamel terms that occur in the study of the hierarchy. In the averaged estimate, we do not need to integrate in the time variable. An averaged spacetime estimate for this range of regularity exponents then follows as a direct corollary. The range of regularity exponents that we obtain is . It was shown in our previous joint work with Gressman (J Funct Anal 266(7):4705-4764, 2014) that the range is sharp in the corresponding deterministic spacetime estimate. This is in contrast to the non-periodic setting, which was studied by Klainerman and Machedon (Commun Math Phys 279(1):169-185, 2008), where the spacetime estimate is known to hold whenever . The goal of our paper is to extend the range of α in this class of estimates in a probabilistic sense. We use the new estimate and the ideas from its proof in order to study randomized forms of the Gross-Pitaevskii hierarchy. More precisely, we consider hierarchies similar to the Gross-Pitaevskii hierarchy, but in which the collision operator has been randomized. For these hierarchies, we show convergence to zero in low regularity Sobolev spaces of Duhamel expansions of fixed deterministic density matrices. We believe that the study of the randomized collision operators could be the first step in the understanding of a nonlinear form of randomization.
Cluster randomized trials for pharmacy practice research.
Gums, Tyler; Carter, Barry; Foster, Eric
2016-06-01
Introduction: Cluster randomized trials (CRTs) are now the gold standard in health services research, including pharmacy-based interventions. Studies of behaviour, epidemiology, lifestyle modifications, educational programs, and health care models are utilizing the strengths of cluster randomized analyses. Methodology: The key property of CRTs is the unit of randomization (clusters), which may be different from the unit of analysis (individual). Subject sample size and, ideally, the number of clusters are determined by the relationship of between-cluster and within-cluster variability. The correlation among participants recruited from the same cluster is known as the intraclass correlation coefficient (ICC). Generally, having more clusters with smaller ICC values will lead to smaller sample sizes. When selecting clusters, stratification before randomization may be useful in decreasing imbalances between study arms. Participant recruitment methods can differ from other types of randomized trials, as blinding a behavioural intervention cannot always be done. When to use: CRTs can yield results that are relevant for making "real world" decisions. CRTs are often used in non-therapeutic intervention studies (e.g. change in practice guidelines). The advantages of CRT design in pharmacy research have been avoiding contamination and the generalizability of the results. A large CRT that studied physician-pharmacist collaborative management of hypertension is used in this manuscript as a CRT example. The trial, entitled Collaboration Among Pharmacists and physicians To Improve Outcomes Now (CAPTION), was implemented in primary care offices in the United States for hypertensive patients. Limitations: CRT design limitations include the need for a large number of clusters, high costs, increased training, increased monitoring, and statistical complexity. PMID:26715549
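The sample-size relationship described in the abstract is usually captured by the standard design effect, 1 + (m − 1) × ICC for average cluster size m. A quick sketch, with illustrative numbers that are not taken from the CAPTION trial:

```python
def design_effect(cluster_size, icc):
    """Variance inflation from randomizing clusters rather than individuals:
    1 + (m - 1) * ICC for average cluster size m."""
    return 1 + (cluster_size - 1) * icc

# e.g. 200 subjects per arm under individual randomization,
# clusters of 20 patients, ICC = 0.05
n_cluster_trial = 200 * design_effect(20, 0.05)
print(round(n_cluster_trial))   # design effect 1.95 nearly doubles the sample
```

This is why trials with many small clusters and a low ICC are cheaper per unit of statistical power than trials with a few large clusters.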
Randomized phase 1b trial of MOR103, a human antibody to GM-CSF, in multiple sclerosis
Asher, Aliya; Fryze, Waldemar; Kozubski, Wojciech; Wagner, Frank; Aram, Jehan; Tanasescu, Radu; Korolkiewicz, Roman P.; Dirnberger-Hertweck, Maren; Steidl, Stefan; Libretto, Susan E.; Sprenger, Till; Radue, Ernst W.
2015-01-01
Objectives: To determine the safety, pharmacokinetics (PK), and immunogenicity of the recombinant human monoclonal antibody MOR103 to granulocyte-macrophage colony-stimulating factor (GM-CSF) in patients with multiple sclerosis (MS) with clinical or MRI activity. Methods: In this 20-week, randomized, double-blind, placebo-controlled phase 1b dose-escalation trial (registration number NCT01517282), adults with relapsing-remitting MS (RRMS) or secondary progressive MS (SPMS) received an IV infusion of placebo (n = 6) or MOR103 0.5 (n = 8), 1.0 (n = 8), or 2.0 (n = 9) mg/kg every 2 weeks for 10 weeks. Patients had to have ≤10 gadolinium (Gd)-enhancing brain lesions on T1-weighted MRI at baseline. The primary objective was safety. Results: Most treatment-emergent adverse events (TEAEs) were mild to moderate in severity. The most frequent was nasopharyngitis. Between-group differences in TEAE numbers were small. There were no TEAE-related trial discontinuations, infusion-related reactions, or deaths. Nine patients experienced MS exacerbations: 3, 5, 1, and 0 patient(s) in the placebo, 0.5, 1.0, and 2.0 mg/kg groups, respectively. A few T1 Gd-enhancing lesions and/or new or enlarging T2 lesions indicative of inflammation were observed in all treatment groups. No clinically significant changes were observed in other clinical assessments or laboratory safety assessments. No anti-MOR103 antibodies were detected. PK evaluations indicated dose linearity with low/no drug accumulation over time. Conclusions: MOR103 was generally well-tolerated in patients with RRMS or SPMS. No evidence of immunogenicity was found. Classification of evidence: This phase 1b study provides Class I evidence that MOR103 has acceptable tolerability in patients with MS. PMID:26185773
Delucchi, Kevin L.; Prochaska, Judith J.
2015-01-01
Introduction: In an ethnically-diverse, uninsured psychiatric sample with co-occurring drug/alcohol addiction, we evaluated the feasibility and reproducibility of a tobacco treatment intervention. The intervention previously demonstrated efficacy in insured psychiatric and nonpsychiatric samples with 20.0%–25.0% abstinence at 18 months. Methods: Daily smokers, recruited in 2009–2010 from psychiatric units at an urban public hospital, were randomized to usual care (on-unit nicotine replacement plus quit advice) or intervention, which added a Transtheoretical-model tailored, computer-assisted intervention, stage-matched manual, brief counseling, and 10-week post-hospitalization nicotine replacement. Results: The sample (N = 100, 69% recruitment rate, age M = 40) was 56% racial/ethnic minority, 65% male, 79% unemployed, and 48% unstably housed, diagnosed with unipolar (54%) and bipolar (14%) depression and psychotic disorders (46%); 77% reported past-month illicit drug use. Prior to hospitalization, participants averaged 19 (SD = 11) cigarettes/day for 23 (SD = 13) years; 80% smoked within 30 minutes of awakening; 25% were preparing to quit. Encouraging and comparable to effects in the general population, 7-day point prevalence abstinence for intervention versus control was 12.5% versus 7.3% at 3 months, 17.5% versus 8.5% at 6 months, and 26.2% versus 16.7% at 12 months. Retention exceeded 80% over 12 months. The odds of abstinence increased over time, predicted by higher self-efficacy, greater perceived social status, and diagnosis of psychotic disorder compared to unipolar depression. Conclusions: Findings indicate uninsured smokers with serious mental illness can engage in tobacco treatment research with quit rates comparable to the general population. A larger investigation is warranted. Inclusion of diverse smokers with mental illness in clinical trials is supported and encouraged. PMID:26180227
Cherkin, Daniel C; Sherman, Karen J; Kahn, Janet; Erro, Janet H; Deyo, Richard A; Haneuse, Sebastien J; Cook, Andrea J
2009-01-01
Background Chronic back pain is a major public health problem and the primary reason patients seek massage treatment. Despite the growing use of massage for chronic low back pain, there have been few studies of its effectiveness. This trial will be the first evaluation of the effectiveness of relaxation massage for chronic back pain and the first large trial of a focused structural form of massage for this condition. Methods and Design A total of 399 participants (133 in each of three arms) between 20 and 65 years of age who have low back pain lasting at least 3 months will be recruited from an integrated health care delivery system. They will be randomized to one of two types of massage ("focused structural massage" or "relaxation massage"), or continued usual medical care. Ten massage treatments will be provided over 10 weeks. The primary outcomes, standard measures of dysfunction and bothersomeness of low back pain, will be assessed at baseline and after 10, 26, and 52 weeks by telephone interviewers masked to treatment assignment. General health status, satisfaction with back care, days of back-related disability, perceived stress, and use and costs of healthcare services for back pain will also be measured. Outcomes across assigned treatment groups will be compared using generalized estimating equations, accounting for participant correlation and adjusted for baseline value, age, and sex. For both primary outcome measures, this trial will have at least 85% power to detect the presence of a minimal clinically significant difference among the three treatment groups and 91% power for pairwise comparisons. Secondary analyses will compare the proportions of participants in each group that improve by a clinically meaningful amount. Conclusion Results of this trial will help clarify the value of two types of massage therapy for chronic low back pain. Trial registration ClinicalTrials.gov NCT00371384. PMID:19843340
Christensen, Britt; Ludvigsen, Maja; Nellemann, Birgitte; Kopchick, John J.; Honoré, Bent; Jørgensen, Jens Otto L.
2015-01-01
Introduction Despite implementation of the biological passport to detect erythropoietin abuse, a need for additional biomarkers remains. We used a proteomic approach to identify novel serum biomarkers of prolonged erythropoiesis-stimulating agent (ESA) exposure (Darbepoietin-α) and/or aerobic training. Trial Design Thirty-six healthy young males were randomly assigned to the following groups: Sedentary-placebo (n = 9), Sedentary-ESA (n = 9), Training-placebo (n = 10), or Training-ESA (n = 8). They were treated with placebo/Darbepoietin-α subcutaneously once per week for 10 weeks followed by a 3-week washout period. Training consisted of supervised biking three times per week for 13 weeks at the highest possible intensity. Serum was collected at baseline, week 3 (high dose Darbepoietin-α), week 10 (reduced dose Darbepoietin-α), and after a 3-week washout period. Methods Serum proteins were separated according to charge and molecular mass (2D-gel electrophoresis). The identity of proteins from spots exhibiting altered intensity was determined by mass spectrometry. Results Six protein spots changed in response to Darbepoietin-α treatment. Comparing all 4 experimental groups, two protein spots (serotransferrin and haptoglobin/haptoglobin related protein) showed a significant response to Darbepoietin-α treatment. The haptoglobin/haptoglobin related protein spot showed a significantly lower intensity in all subjects in the training-ESA group during the treatment period and increased during the washout period. Conclusion An isoform of haptoglobin/haptoglobin related protein could be a new anti-doping marker and merits further research. Trial Registration ClinicalTrials.gov NCT01320449 PMID:25679398
Jarvis, Joseph N; Meintjes, Graeme; Rebe, Kevin; Williams, Gertrude Ntombomzi; Bicanic, Tihana; Williams, Anthony; Schutz, Charlotte; Bekker, Linda-Gail; Wood, Robin; Harrison, Thomas S
2013-01-01
Background Interferon-γ is of key importance in the immune response to Cryptococcus neoformans. Mortality related to cryptococcal meningitis (CM) remains high, and novel treatment strategies are needed. We performed an RCT to determine whether addition of IFNγ to standard therapy increased the rate of clearance of cryptococcal infection in HIV-associated CM. Methods Patients were randomized to: (1) Amphotericin-B 1 mg/kg/day plus 5-FC 100 mg/kg/day for 2 weeks (Standard therapy), (2) Standard therapy plus IFNγ1b 100 μg on days 1 and 3 (IFNγ 2-doses), or (3) Standard therapy plus IFNγ1b 100 μg on days 1, 3, 5, 8, 10 and 12 (IFNγ 6-doses). Primary outcome was rate of clearance of Cryptococcus from the CSF (early fungicidal activity, EFA) calculated from serial quantitative cultures, previously shown to be independently associated with survival. Results Rate of fungal clearance was significantly faster in the IFNγ-containing groups than with standard treatment. Mean EFA (logCFU/ml/day) was −0.49 with standard treatment, −0.64 with IFNγ 2-doses, and −0.64 with IFNγ 6-doses. The difference in EFA was −0.15 (95% CI −0.02 to −0.27, p=0.02) between standard treatment and IFNγ 2-doses, and −0.15 (95% CI −0.05 to −0.26, p=0.006) between standard treatment and IFNγ 6-doses. Mortality was 16% (14/88) at 2 weeks and 31% (27/87) at 10 weeks, with no significant difference between groups. All treatments were well tolerated. Conclusions Addition of short-course IFNγ to standard treatment significantly increased the rate of clearance of cryptococcal infection from the CSF, and was not associated with any increase in adverse events. Two doses of IFNγ are as effective as 6 doses. PMID:22421244
Williams, Alishia D.; O’Moore, Kathleen; Blackwell, Simon E.; Smith, Jessica; Holmes, Emily A.; Andrews, Gavin
2015-01-01
Background Accruing evidence suggests that positive imagery-based cognitive bias modification (CBM) could have potential as a standalone targeted intervention for depressive symptoms or as an adjunct to existing treatments. We sought to establish the benefit of this form of CBM when delivered prior to Internet cognitive behavioral therapy (iCBT) for depression. Methods A randomized controlled trial (RCT) of a 1-week Internet-delivered positive CBM vs. an active control condition for participants (N=75, 69% female, mean age=42) meeting diagnostic criteria for major depression, followed by a 10-week iCBT program for both groups. Results Modified intent-to-treat marginal and mixed effect models demonstrated no significant difference between conditions following the CBM intervention or the iCBT program. In both conditions there were significant reductions (Cohen's d .57–1.58, 95% CI=.12–2.07) in primary measures of depression and interpretation bias (PHQ9, BDI-II, AST-D). Large effect size reductions (Cohen's d .81–1.32, 95% CI=.31–1.79) were observed for secondary measures of distress, disability, anxiety and repetitive negative thinking (K10, WHODAS, STAI, RTQ). Per protocol analyses conducted in the sample of participants who completed all seven sessions of CBM indicated between-group superiority of the positive over the control group on depression symptoms (PHQ9, BDI-II) and psychological distress (K10) following CBM (Hedges' g .55–.88, 95% CI=−.03–1.46) and following iCBT (PHQ9, K10). The majority (>70%) no longer met diagnostic criteria for depression at 3-month follow-up. Limitations The control condition contained many active components and therefore may have represented a smaller ‘dose’ of the positive condition. Conclusions Results provide preliminary support for the successful integration of imagery-based CBM into an existing Internet-based treatment for depression. PMID:25805405
Mixing thermodynamics of block-random copolymers
NASA Astrophysics Data System (ADS)
Beckingham, Bryan Scott
Random copolymerization of A and B monomers represents a versatile method to tune interaction strengths between polymers, as ArB random copolymers will exhibit a smaller effective Flory interaction parameter χ (or interaction energy density X) upon mixing with A or B homopolymers than upon mixing A and B homopolymers with each other, and the ArB composition can be tuned continuously. Thus, the incorporation of a random copolymer block into the classical block copolymer architecture to yield "block-random" copolymers introduces an additional tuning mechanism for the control of structure-property relationships, as the interblock interactions and physical properties can be tuned continuously through the random block's composition. However, typical living or controlled polymerizations produce compositional gradients along the "random" block, which can in turn influence the phase behavior. This dissertation demonstrates a method by which narrow-distribution copolymers of styrene and isoprene of any desired composition, with no measurable down-chain gradient, are synthesized. This synthetic method is then utilized to incorporate random copolymers of styrene and isoprene as blocks into block-random copolymers in order to examine the resulting interblock mixing thermodynamics. A series of well-defined near-symmetric block and block-random copolymers (S-I, Bd-S, I-SrI, S-SrI and Bd-SrI diblocks, where S is polystyrene, I is polyisoprene and Bd is polybutadiene), with varying molecular weight and random-block composition, is synthesized, and the mixing thermodynamics of their hydrogenated derivatives is examined, via comparison of their interaction energy densities, X, through measurement of the order-disorder transition (ODT) temperature. Hydrogenated derivatives of I-SrI and S-SrI block-random copolymers, both wherein the styrene aromaticity is retained and derivatives wherein the styrene units are saturated to vinylcyclohexane (VCH), are found to hew closely to the
NASA Astrophysics Data System (ADS)
Miszczak, Jarosław Adam
2013-01-01
The presented package for the Mathematica computing system allows the harnessing of quantum random number generators (QRNG) for investigating the statistical properties of quantum states. The described package implements a number of functions for generating random states. The new version of the package adds the ability to use the on-line quantum random number generator service and implements new functions for retrieving lists of random numbers. Thanks to the introduced improvements, the new version provides faster access to high-quality sources of random numbers and can be used in simulations requiring large amounts of random data.
New version program summary
Program title: TRQS
Catalogue identifier: AEKA_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 18 134
No. of bytes in distributed program, including test data, etc.: 2 520 49
Distribution format: tar.gz
Programming language: Mathematica, C.
Computer: Any supporting Mathematica in version 7 or higher.
Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit).
RAM: Case-dependent
Supplementary material: Fig. 1 mentioned below can be downloaded.
Classification: 4.15.
External routines: Quantis software library (http://www.idquantique.com/support/quantis-trng.html)
Catalogue identifier of previous version: AEKA_v1_0
Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 118
Does the new version supersede the previous version?: Yes
Nature of problem: Generation of random density matrices and utilization of high-quality random numbers for the purpose of computer simulation.
Solution method: Use of a physical quantum random number generator and an on-line service providing access to the source of true random
Gunn, Rebecca; Russell, Jeremy K; Ary, Dennis V
2016-01-01
Background Worldwide, depression is rated as the fourth leading cause of disease burden and is projected to be the second leading cause of disability by 2020. Annual depression-related costs in the United States are estimated at US $210.5 billion, with employers bearing over 50% of these costs in productivity loss, absenteeism, and disability. Because most adults with depression never receive treatment, there is a need to develop effective interventions that can be more widely disseminated through new channels, such as employee assistance programs (EAPs), and directly to individuals who will not seek face-to-face care. Objective This study evaluated a self-guided intervention, using the MoodHacker mobile Web app to activate the use of cognitive behavioral therapy (CBT) skills in working adults with mild-to-moderate depression. It was hypothesized that MoodHacker users would experience reduced depression symptoms and negative cognitions, and increased behavioral activation, knowledge of depression, and functioning in the workplace. Methods A parallel two-group randomized controlled trial was conducted with 300 employed adults exhibiting mild-to-moderate depression. Participants were recruited from August 2012 through April 2013 in partnership with an EAP and with outreach through a variety of additional non-EAP organizations. Participants were blocked on race/ethnicity and then randomly assigned within each block to receive, without clinical support, either the MoodHacker intervention (n=150) or alternative care consisting of links to vetted websites on depression (n=150). Participants in both groups completed online self-assessment surveys at baseline, 6 weeks after baseline, and 10 weeks after baseline. Surveys assessed (1) depression symptoms, (2) behavioral activation, (3) negative thoughts, (4) worksite outcomes, (5) depression knowledge, and (6) user satisfaction and usability. After randomization, all interactions with subjects were automated with the
Cakir, Leyla; Dilbaz, Berna; Caliskan, Eray; Dede, F Suat; Dilbaz, Serdar; Haberal, Ali
2005-05-01
The objective of this prospective randomized placebo-controlled study was to determine the effectiveness of 400 μg oral and 400 μg vaginal misoprostol administration for cervical priming 3 h prior to manual vacuum aspiration (MVA) under local anesthesia for voluntary termination of pregnancy before 10 weeks of gestation in comparison with placebo oral or placebo vaginal administration (n=40 in each group). Postmedication cervical dilatation was similar in the oral (mean, 6.6 ± 1.5) and vaginal (mean, 7.2 ± 0.8) misoprostol groups but significantly higher compared with the oral (mean, 3.4 ± 0.2) and vaginal (mean, 3.6 ± 1.9) placebo groups. Duration of the procedure was also significantly shorter in the misoprostol groups in comparison with their placebo counterparts. Preoperative bleeding and side effects were more common in the misoprostol groups, but none required medical intervention. Intraoperative bleeding was less in the vaginal misoprostol group compared with the placebo groups. There was no significant difference in terms of visual analogue scores during the procedure, patient satisfaction, days of postoperative bleeding and rate of postoperative complications among the groups. Cervical priming with misoprostol administered orally or vaginally 3 h before MVA for termination of pregnancy under local anesthesia facilitates the procedure by decreasing the need for cervical dilatation and by shortening its duration without improving patients' pain perception and satisfaction mainly due to side effects. PMID:15854633
Johansson, Robert; Björklund, Martin; Hornborg, Christoffer; Karlsson, Stina; Hesser, Hugo; Ljótsson, Brjánn; Rousseau, Andréas; Frederick, Ronald J; Andersson, Gerhard
2013-01-01
Background. Psychodynamic psychotherapy is a psychological treatment approach that has a growing empirical base. Research has indicated an association between therapist-facilitated affective experience and outcome in psychodynamic therapy. Affect-phobia therapy (APT), as outlined by McCullough et al., is a psychodynamic treatment that emphasizes a strong focus on expression and experience of affect. This model has not yet been evaluated for depression or anxiety disorders in a randomized controlled trial. While Internet-delivered psychodynamic treatments for depression and generalized anxiety disorder exist, they have not been based on APT. The aim of this randomized controlled trial was to investigate the efficacy of an Internet-based, psychodynamic, guided self-help treatment based on APT for depression and anxiety disorders. Methods. One hundred participants with diagnoses of mood and anxiety disorders participated in a randomized (1:1 ratio) controlled trial of an active group versus a control condition. The treatment group received a 10-week, psychodynamic, guided self-help treatment based on APT that was delivered through the Internet. The treatment consisted of eight text-based treatment modules and included therapist contact (9.5 min per client per week, on average) in a secure online environment. Participants in the control group also received online therapist support and clinical monitoring of symptoms, but received no treatment modules. Outcome measures were the 9-item Patient Health Questionnaire Depression Scale (PHQ-9) and the 7-item Generalized Anxiety Disorder Scale (GAD-7). Process measures were also included. All measures were administered weekly during the treatment period and at a 7-month follow-up. Results. Mixed models analyses using the full intention-to-treat sample revealed significant interaction effects of group and time on all outcome measures, when comparing treatment to the control group. A large between-group effect size of Cohen's d
Tuneabilities of localized electromagnetic modes in random nanostructures for random lasing
NASA Astrophysics Data System (ADS)
Takeda, S.; Obara, M.
2010-02-01
The modal characteristics of localized electromagnetic waves inside random nanostructures are theoretically presented. It is crucial to know the tuneabilities of the localized modes systematically for demonstrating a specific random lasing application. By use of the FDTD (Finite-Difference Time-Domain) method, we investigated the impulse response of two-dimensional random nanostructures consisting of closely packed cylindrical dielectric columns, and precisely analyzed the localized modes. We revealed the tuneability of the frequency of the localized modes by controlling the medium configurations: diameter, spatial density, and refractive index of the cylinders. Furthermore, the Q (quality) factors of the localized modes are found to be dramatically tuneable simply by controlling the system size of the entire medium. Q factors of approximately 1.6×10⁴ were observed in our random disordered structures.
Generalized Random Sequential Adsorption on Erdős-Rényi Random Graphs
NASA Astrophysics Data System (ADS)
Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur
2016-09-01
We investigate random sequential adsorption (RSA) on a random graph via the following greedy algorithm: Order the n vertices at random, and sequentially declare each vertex either active or frozen, depending on some local rule in terms of the state of the neighboring vertices. The classical RSA rule declares a vertex active if none of its neighbors is, in which case the set of active nodes forms an independent set of the graph. We generalize this nearest-neighbor blocking rule in three ways and apply it to the Erdős-Rényi random graph. We consider these generalizations in the large-graph limit n→ ∞ and characterize the jamming constant, the limiting proportion of active vertices in the maximal greedy set.
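The greedy rule described in this abstract is straightforward to simulate. Below is a minimal sketch of the classical (nearest-neighbor blocking) rule on an Erdős-Rényi graph; the graph size, mean degree, trial count, and seed are illustrative assumptions, not values from the paper.

```python
import random

def rsa_greedy_fraction(n, c, rng):
    """Classical RSA rule on an Erdos-Renyi graph G(n, c/n): visit the
    vertices in uniformly random order and activate a vertex only if
    none of its neighbors is already active. Returns the fraction of
    active vertices, i.e. the size of the greedy independent set / n."""
    p = c / n
    adj = [set() for _ in range(n)]
    for i in range(n):                 # sample each edge independently
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    order = list(range(n))
    rng.shuffle(order)
    active = set()
    for v in order:
        if not (adj[v] & active):      # nearest-neighbor blocking rule
            active.add(v)
    return len(active) / n

rng = random.Random(42)
# Average over a few graphs with mean degree c = 2; the active fraction
# approximates the jamming constant for this rule.
estimate = sum(rsa_greedy_fraction(1000, 2.0, rng) for _ in range(5)) / 5
```

For this classical rule with mean degree c, the large-n active fraction is commonly quoted as ln(1+c)/c, about 0.55 for c = 2, so the Monte Carlo estimate above should land close to that value.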
Casey, S. C.; Patterson, R. L.; Gross, M.; Lickliter, K.; Stein, J. S.
2003-02-25
The U.S. Department of Energy (DOE) is responsible for disposing of transuranic waste in the Waste Isolation Pilot Plant (WIPP) in southeastern New Mexico. As part of that responsibility, DOE must comply with the U.S. Environmental Protection Agency's (EPA) radiation protection standards in Title 40 Code of Federal Regulations (CFR), Parts 191 and 194. This paper addresses compliance with the criteria of 40 CFR Section 194.24(d) and 194.24(f) that require DOE to either provide a waste loading scheme for the WIPP repository or to assume random emplacement in the mandated performance and compliance assessments. The DOE established a position on waste loading schemes during the process of obtaining the EPA's initial Certification in 1998. The justification for utilizing a random waste emplacement distribution within the WIPP repository was provided to the EPA. During the EPA rulemaking process for the initial certification, the EPA questioned DOE on whether waste would be loaded randomly as modeled in long-term performance assessment (PA) and the impact, if any, of nonrandom loading. In response, DOE conducted an impact assessment for non-random waste loading. The results of this assessment supported the contention that it does not matter whether random or non-random waste loading is assumed for the PA. The EPA determined that a waste loading plan was unnecessary because DOE had assumed random waste loading and evaluated the potential consequences of non-random loading for a very high activity waste stream. In other words, the EPA determined that DOE was not required to provide a waste loading scheme because compliance is not affected by the actual distribution of waste containers in the WIPP.
Maximization of Extractable Randomness in a Quantum Random-Number Generator
NASA Astrophysics Data System (ADS)
Haw, J. Y.; Assad, S. M.; Lance, A. M.; Ng, N. H. Y.; Sharma, V.; Lam, P. K.; Symul, T.
2015-05-01
The generation of random numbers via quantum processes is an efficient and reliable method to obtain true indeterministic random numbers that are of vital importance to cryptographic communication and large-scale computer modeling. However, in realistic scenarios, the raw output of a quantum random-number generator is inevitably tainted by classical technical noise. The integrity of the device can be compromised if this noise is tampered with or even controlled by some malicious party. To safeguard against this, we propose and experimentally demonstrate an approach that produces side-information-independent randomness that is quantified by min-entropy conditioned on this classical noise. We present a method for maximizing the conditional min-entropy of the number sequence generated from a given quantum-to-classical-noise ratio. The detected photocurrent in our experiment is shown to have a real-time random-number generation rate of 14 (Mbit/s)/MHz. The spectral response of the detection system shows the potential to deliver more than 70 Gbit/s of random numbers in our experimental setup.
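As a toy illustration of the min-entropy quantity this abstract is built around (not the authors' conditional estimator, which also accounts for classical side information), the empirical per-symbol min-entropy of a sampled source can be computed as follows; the sample sequences are invented for illustration.

```python
import math
from collections import Counter

def min_entropy(samples):
    """Empirical min-entropy in bits per symbol: -log2 of the relative
    frequency of the most likely outcome. This is the worst-case
    (guessing) entropy that randomness extractors are rated against."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

# A biased source carries less extractable randomness than a uniform one.
uniform = min_entropy([0, 1, 2, 3] * 250)    # 4 equally likely symbols -> 2 bits
biased = min_entropy([0] * 900 + [1] * 100)  # one symbol dominates -> ~0.15 bits
```

The Shannon entropy of the biased source would be noticeably higher (~0.47 bits), which is why min-entropy, not Shannon entropy, is the conservative figure of merit for extractable randomness.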
Conformational transitions in random heteropolymer models
NASA Astrophysics Data System (ADS)
Blavatska, Viktoria; Janke, Wolfhard
2014-01-01
We study the conformational properties of heteropolymers containing two types of monomers A and B, modeled as self-attracting self-avoiding random walks on a regular lattice. Such a model can describe in particular the sequences of hydrophobic and hydrophilic residues in proteins [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] and polyampholytes with oppositely charged groups [Y. Kantor and M. Kardar, Europhys. Lett. 28, 169 (1994)]. Treating the sequences of the two types of monomers as quenched random variables, we provide a systematic analysis of possible generalizations of this model. To this end we apply the pruned-enriched Rosenbluth chain-growth algorithm, which allows us to obtain the phase diagrams of extended and compact states coexistence as function of both the temperature and fraction of A and B monomers along the heteropolymer chain.
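As a sketch of the chain-growth idea behind the pruned-enriched Rosenbluth method mentioned above, the following shows plain (unpruned) Rosenbluth sampling of a homopolymer self-avoiding walk on the square lattice; the A/B monomer interactions and the pruning/enrichment steps of the actual study are omitted, and all sizes are invented for illustration.

```python
import random

def rosenbluth_walk(n_steps, rng):
    """Grow one self-avoiding walk on the square lattice by Rosenbluth
    sampling: at each step pick uniformly among unoccupied neighbor
    sites and accumulate the number of free choices as a weight, which
    corrects the sampling bias of the growth procedure."""
    pos = (0, 0)
    visited = {pos}
    weight = 1.0
    for _ in range(n_steps):
        x, y = pos
        free = [q for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                if q not in visited]
        if not free:               # walk trapped itself: dead configuration
            return None, 0.0
        weight *= len(free)        # Rosenbluth weight factor
        pos = rng.choice(free)
        visited.add(pos)
    return pos, weight

rng = random.Random(0)
# Weight-averaged squared end-to-end distance over many grown chains.
num, den = 0.0, 0.0
for _ in range(2000):
    end, w = rosenbluth_walk(20, rng)
    if w:
        num += w * (end[0] ** 2 + end[1] ** 2)
        den += w
r2 = num / den
```

The weighted average r2 estimates the self-avoiding-walk mean squared end-to-end distance, which grows as N^(2ν) with ν ≈ 3/4 in two dimensions; pruning and enrichment (PERM) tame the large weight fluctuations this plain version suffers from at longer chain lengths.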
Provable quantum advantage in randomness processing.
Dale, Howard; Jennings, David; Rudolph, Terry
2015-01-01
Quantum advantage is notoriously hard to find and even harder to prove. For example, the class of functions computable with classical physics exactly coincides with the class computable quantum mechanically. It is strongly believed, but not proven, that quantum computing provides exponential speed-up for a range of problems, such as factoring. Here we address a computational scenario of randomness processing in which quantum theory provably yields, not only resource reduction over classical stochastic physics, but a strictly larger class of problems which can be solved. Beyond new foundational insights into the nature and malleability of randomness, and the distinction between quantum and classical information, these results also offer the potential of developing classically intractable simulations with currently accessible quantum technologies. PMID:26381816
Random sequential adsorption of starlike particles.
Cieśla, Michał; Karbowniczek, Paweł
2015-04-01
Random packing of surfaceless starlike particles built of 3 to 50 line segments was studied using the random sequential adsorption algorithm. Numerical simulations allow us to determine saturated packing densities as well as the first two virial expansion coefficients for such objects. The measured kinetics of the packing growth supports the power law known to be valid for particles with a finite surface; however, the dependence of the exponent in this law on the number of star arms is unexpected. The density autocorrelation function shows fast superexponential decay as for disks, but the typical distance between closest stars is much smaller than between disks of similar size, especially for a small number of arms. PMID:25974505
Steering random walks with kicked ultracold atoms
NASA Astrophysics Data System (ADS)
Weiß, Marcel; Groiseau, Caspar; Lam, W. K.; Burioni, Raffaella; Vezzani, Alessandro; Summy, Gil S.; Wimberger, Sandro
2015-09-01
The kicking sequence of the atom-optics kicked rotor at quantum resonance can be interpreted as a quantum random walk in momentum space. We show how such a walk can become the basis for nontrivial classical walks by applying a random sequence of intensities and phases of the kicking lattice chosen according to a probability distribution. This distribution converts on average into the final momentum distribution of the kicked atoms. In particular, it is shown that a power-law distribution for the kicking strengths results in a Lévy walk in momentum space and in a power law with the same exponent in the averaged momentum distribution. Furthermore, we investigate the stability of our predictions in the context of a realistic experiment with Bose-Einstein condensates.
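The connection claimed here, power-law-distributed kick strengths producing a Lévy walk, can be caricatured with a classical one-dimensional walk whose step lengths are drawn from a Pareto distribution; the exponents, walk lengths, and sample counts below are illustrative, not the experimental parameters.

```python
import random

def levy_walk(n_steps, alpha, rng):
    """1D walk whose unsigned step lengths follow a Pareto (power-law)
    distribution with tail exponent alpha; signs are chosen at random.
    For alpha < 2 the step variance diverges and rare huge steps
    dominate the spread (Levy regime)."""
    x = 0.0
    for _ in range(n_steps):
        step = rng.paretovariate(alpha)          # power-law step length
        x += step if rng.random() < 0.5 else -step
    return x

rng = random.Random(7)
# Finite-variance steps give diffusive, Gaussian-like displacements;
# alpha < 2 gives a heavy-tailed (Levy) displacement distribution.
gauss_like = [levy_walk(100, 3.5, rng) for _ in range(500)]
heavy = [levy_walk(100, 1.2, rng) for _ in range(500)]
```

The heavy-tailed ensemble shows occasional displacements orders of magnitude beyond anything in the finite-variance ensemble, mirroring how a power law in the kicking strengths imprints the same exponent on the averaged momentum distribution.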
Random Test Run Length and Effectiveness
NASA Technical Reports Server (NTRS)
Andrews, James H.; Groce, Alex; Weston, Melissa; Xu, Ru-Gang
2008-01-01
A poorly understood but important factor in many applications of random testing is the selection of a maximum length for test runs. Given a limited time for testing, it is seldom clear whether executing a small number of long runs or a large number of short runs maximizes utility. It is generally expected that longer runs are more likely to expose failures -- which is certainly true with respect to runs shorter than the shortest failing trace. However, longer runs produce longer failing traces, requiring more effort from humans in debugging or more resources for automated minimization. In testing with feedback, increasing ranges for parameters may also cause the probability of failure to decrease in longer runs. We show that the choice of test length dramatically impacts the effectiveness of random testing, and that the patterns observed in simple models and predicted by analysis are useful in understanding effects observed.
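The trade-off the authors describe can be seen in a toy model in which a failure only becomes reachable after a certain run depth, after which each step fails with small probability; all numbers below (depth, per-step probability, budget) are invented for illustration.

```python
import random

def run_test(max_len, fail_depth, p_step, rng):
    """One random test run: the bug is only reachable after `fail_depth`
    steps; beyond that, each step fails independently with probability
    p_step. Returns the failing trace length, or None if the run passes."""
    for step in range(1, max_len + 1):
        if step > fail_depth and rng.random() < p_step:
            return step
    return None

def campaign(budget, run_len, fail_depth=30, p_step=0.02, rng=None):
    """Spend a fixed budget of total test steps on runs of length
    run_len; report (failures found, mean failing-trace length)."""
    rng = rng or random.Random(0)
    traces = []
    for _ in range(budget // run_len):
        t = run_test(run_len, fail_depth, p_step, rng)
        if t is not None:
            traces.append(t)
    mean = sum(traces) / len(traces) if traces else 0.0
    return len(traces), mean

short = campaign(100_000, 20)    # runs shorter than the failing depth
long_ = campaign(100_000, 200)   # long runs find failures, longer traces
```

Runs shorter than the failing depth find nothing regardless of budget, while long runs find many failures but every trace handed to debugging or minimization exceeds the minimal depth, which is exactly the tension the paper analyzes.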
Fast Magnetic Micropropellers with Random Shapes.
Vach, Peter J; Fratzl, Peter; Klumpp, Stefan; Faivre, Damien
2015-10-14
Studying propulsion mechanisms in low Reynolds number fluid has implications for many fields, ranging from the biology of motile microorganisms and the physics of active matter to micromixing in catalysis and micro- and nanorobotics. The propulsion of magnetic micropropellers can be characterized by a dimensionless speed, which solely depends on the propeller geometry for a given axis of rotation. However, this dependence has so far been only investigated for helical propeller shapes, which were assumed to be optimal. In order to explore a larger variety of shapes, we experimentally studied the propulsion properties of randomly shaped magnetic micropropellers. Surprisingly, we found that their dimensionless speeds are high on average, comparable to previously reported nanofabricated helical micropropellers. The highest dimensionless speed we observed is higher than that of any previously reported propeller moving in a low Reynolds number fluid, proving that physical random shape generation can be a viable optimization strategy. PMID:26383225
The subtle nature of financial random walks
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe
2005-06-01
We first review the most important "stylized facts" of financial time series, that turn out to be, to a large extent, universal. We then recall how the multifractal random walk of Bacry, Muzy, and Delour generalizes the standard model of financial price changes and accounts in an elegant way for many of their empirical properties. In a second part, we provide empirical evidence for a very subtle compensation mechanism that underlies the random nature of price changes. This compensation drives the market close to a critical point, that may explain the sensitivity of financial markets to small perturbations, and their propensity to enter bubbles and crashes. We argue that the resulting unpredictability of price changes is very far from the neoclassical view that markets are informationally efficient.
Random fields at a nonequilibrium phase transition.
Barghathi, Hatem; Vojta, Thomas
2012-10-26
We study nonequilibrium phase transitions in the presence of disorder that locally breaks the symmetry between two equivalent macroscopic states. In low-dimensional equilibrium systems, such random-field disorder is known to have dramatic effects: it prevents spontaneous symmetry breaking and completely destroys the phase transition. In contrast, we show that the phase transition of the one-dimensional generalized contact process persists in the presence of random-field disorder. The ultraslow dynamics in the symmetry-broken phase is described by a Sinai walk of the domain walls between two different absorbing states. We discuss the generality and limitations of our theory, and we illustrate our results by large-scale Monte Carlo simulations. PMID:23215170
Nonperturbative dynamical decoupling with random control.
Jing, Jun; Bishop, C Allen; Wu, Lian-Ao
2014-01-01
Parametric fluctuations or stochastic signals are introduced into the rectangular pulse sequence to investigate the feasibility of random dynamical decoupling. In a large parameter region, we find that the out-of-order control pulses work as well as the regular pulses for dynamical decoupling and dissipation suppression. Calculations and analysis are enabled by and based on a nonperturbative dynamical decoupling approach allowed by an exact quantum-state-diffusion equation. When the average frequency and duration of the pulse sequence take proper values, the random control sequence is robust, fault-tolerant, and insensitive to pulse strength deviations and interpulse temporal separation in the quasi-periodic sequence. This greatly relaxes the operational requirements placed on quantum control devices. PMID:25169735
Ray propagation in nonuniform random lattices
NASA Astrophysics Data System (ADS)
Martini, Anna; Franceschetti, Massimo; Massa, Andrea
2006-09-01
The problem of optical ray propagation in a nonuniform random half-plane lattice is considered. An external source radiates a planar monochromatic wave impinging at an angle θ on a half-plane random grid where each cell can be independently occupied with probability q_j = 1 − p_j, with j being the row index. The wave undergoes specular reflections on the occupied cells, and the probability of penetrating up to level k inside the lattice is analytically estimated. Numerical experiments validate the proposed approach and show improvement upon previous results that appeared in the literature. Applications are in the field of remote sensing and communications, where estimation of the penetration of electromagnetic waves in disordered media is of interest.
Brownian motion on random dynamical landscapes
NASA Astrophysics Data System (ADS)
Suñé Simon, Marc; Sancho, José María; Lindenberg, Katja
2016-03-01
We present a study of overdamped Brownian particles moving on a random landscape of dynamic and deformable obstacles (spatio-temporal disorder). The obstacles move randomly, assemble, and dissociate following their own dynamics. This landscape may represent a soft-matter or liquid environment in which large obstacles, such as macromolecules and organelles in the cytoplasm of a living cell, or colloids or polymers in a liquid, move slowly, leading to crowding effects. This representation also constitutes a novel approach to the macroscopic dynamics exhibited by active matter media. We present numerical results on the transport and diffusion properties of Brownian particles under this disorder biased by a constant external force. The landscape dynamics are characterized by a Gaussian spatio-temporal correlation, with fixed time and spatial scales, and controlled obstacle concentrations.
Randomly oriented carbon/carbon composite
NASA Astrophysics Data System (ADS)
Raunija, Thakur Sudesh Kumar; Babu, S.
2013-06-01
The main objective of this study is to develop an alternative, rapid, and cost-effective process for the fabrication of carbon/carbon (C/C) composite. Slurry moulding technique is adopted for the fabrication of C/C composite. Randomly oriented hybrid discrete carbon fiber (CF) reinforced and mesophase pitch (MP) derived matrix C/C composite is fabricated. Process parameters are optimized and repeatability is proved. The electrical conductivity of the composite fabricated through the developed process is found to be better than that fabricated through conventional processes. The other properties are also found to be competitive. The randomly oriented C/C composite, because of its mouldability, is found suitable for various applications which require complex shapes.
Directed Random Markets: Connectivity Determines Money
NASA Astrophysics Data System (ADS)
Martínez-Martínez, Ismael; López-Ruiz, Ricardo
2013-12-01
The Boltzmann-Gibbs (BG) distribution arises as the statistical equilibrium probability distribution of money among the agents of a closed economic system where random and undirected exchanges are allowed. When considering a model with uniform savings in the exchanges, the final distribution is close to the gamma family. In this paper, we implement these exchange rules on networks and we find that these stationary probability distributions are robust and are not affected by the topology of the underlying network. We introduce a new family of interactions: random but directed ones. In this case, the topology is found to be determinant, and the mean money per economic agent is related to the degree of the node representing the agent in the network. The relation between the mean money per economic agent and its degree is shown to be linear.
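The undirected baseline model is easy to reproduce: agents start with equal money, and a randomly chosen pair repeatedly pools and splits its money at a uniform fraction, which drives the wealth distribution toward Boltzmann-Gibbs. The agent count, trade count, and seed below are illustrative, and the savings and network variants discussed in the paper are omitted.

```python
import random

def random_exchange(n_agents=2000, n_trades=200000, m0=100.0, seed=2):
    """Closed economy with random, undirected exchanges: two agents
    pool their money and split it at a uniformly random fraction."""
    rng = random.Random(seed)
    money = [m0] * n_agents
    for _ in range(n_trades):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pool = money[i] + money[j]
        eps = rng.random()
        money[i], money[j] = eps * pool, (1 - eps) * pool
    return money

money = random_exchange()
mean = sum(money) / len(money)            # conserved: stays at m0
below = sum(m < mean for m in money) / len(money)
# For an exponential (BG) distribution, P(m < mean) = 1 - 1/e ~ 0.63
```

The fraction of agents below the mean settling near 0.63 is a quick fingerprint of the exponential stationary distribution.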
Bell experiments with random destination sources
Sciarrino, Fabio; Mataloni, Paolo; Vallone, Giuseppe; Cabello, Adan
2011-03-15
It is generally assumed that sources randomly sending two particles to one or two different observers, random destination sources (RDSs), cannot be used for genuine quantum nonlocality tests because of the postselection loophole. We demonstrate that Bell experiments not affected by the postselection loophole may be performed with (i) an RDS and local postselection using perfect detectors, (ii) an RDS, local postselection, and fair sampling assumption with any detection efficiency, and (iii) an RDS and a threshold detection efficiency required to avoid the detection loophole. These results allow the adoption of RDS setups which are simpler and more efficient for long-distance free-space Bell tests, and extend the range of physical systems which can be used for loophole-free Bell tests.
Random walk centrality in interconnected multilayer networks
NASA Astrophysics Data System (ADS)
Solé-Ribalta, Albert; De Domenico, Manlio; Gómez, Sergio; Arenas, Alex
2016-06-01
Real-world complex systems exhibit multiple levels of relationships. In many cases they must be modeled as interconnected multilayer networks, characterizing interactions of several types simultaneously. It is of crucial importance in many fields, from economics to biology and from urban planning to social sciences, to identify the most (or the least) influential nodes in a network using centrality measures. However, defining the centrality of actors in interconnected complex networks is not trivial. In this paper, we rely on the tensorial formalism recently proposed to characterize and investigate this kind of complex topologies, and extend two well-known random walk centrality measures, the random walk betweenness and closeness centrality, to interconnected multilayer networks. For each of the measures we provide analytical expressions that agree completely with numerical results.
Informed Consent and Cluster-Randomized Trials
Dawson, Angus
2012-01-01
We argue that cluster-randomized trials are an important methodology, essential to the evaluation of many public health interventions. However, in the case of at least some cluster-randomized trials, it is not possible, or is incompatible with the aims of the study, to obtain individual informed consent. This should not necessarily be seen as an impediment to ethical approval, providing that sufficient justification is given for this omission. We further argue that it should be the institutional review board’s task to evaluate whether the protocol is sufficiently justified to proceed without consent and that this is preferable to any reliance on community consent or other means of proxy consent. PMID:22390511
Fast Magnetic Micropropellers with Random Shapes
2015-01-01
Studying propulsion mechanisms in low Reynolds number fluid has implications for many fields, ranging from the biology of motile microorganisms and the physics of active matter to micromixing in catalysis and micro- and nanorobotics. The propulsion of magnetic micropropellers can be characterized by a dimensionless speed, which solely depends on the propeller geometry for a given axis of rotation. However, this dependence has so far been only investigated for helical propeller shapes, which were assumed to be optimal. In order to explore a larger variety of shapes, we experimentally studied the propulsion properties of randomly shaped magnetic micropropellers. Surprisingly, we found that their dimensionless speeds are high on average, comparable to previously reported nanofabricated helical micropropellers. The highest dimensionless speed we observed is higher than that of any previously reported propeller moving in a low Reynolds number fluid, proving that physical random shape generation can be a viable optimization strategy. PMID:26383225
Random phase textures: theory and synthesis.
Galerne, Bruno; Gousseau, Yann; Morel, Jean-Michel
2011-01-01
This paper explores the mathematical and algorithmic properties of two sample-based texture models: random phase noise (RPN) and asymptotic discrete spot noise (ADSN). These models make it possible to synthesize random phase textures. They arguably derive from linearized versions of two early Julesz texture discrimination theories. The ensuing mathematical analysis shows that, contrary to some statements in the literature, RPN and ADSN are different stochastic processes. Nevertheless, numerous experiments also suggest that the textures obtained by these algorithms from identical samples are perceptually similar. The relevance of this study is enhanced by three technical contributions providing solutions to obstacles that prevented the use of RPN or ADSN to emulate textures. First, RPN and ADSN algorithms are extended to color images. Second, a preprocessing is proposed to avoid artifacts due to the nonperiodicity of real-world texture samples. Finally, the method is extended to synthesize textures with arbitrary size from a given sample. PMID:20550995
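The core RPN idea fits in a few lines: keep the Fourier amplitudes of the sample and replace its phases with those of a white-noise image, which preserves the Hermitian symmetry needed for a real output. This is a minimal grayscale sketch; the paper's contributions (color, periodicity preprocessing, arbitrary output size) are omitted, and the toy sample below is an assumption.

```python
import numpy as np

def random_phase_noise(sample, seed=3):
    """RPN sketch: sample's Fourier amplitudes + white-noise phases.
    Using the phases of a real noise image keeps the spectrum
    Hermitian, so the inverse FFT is real up to rounding."""
    rng = np.random.default_rng(seed)
    amplitude = np.abs(np.fft.fft2(sample))
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(sample.shape)))
    out = np.fft.ifft2(amplitude * np.exp(1j * noise_phase))
    return np.real(out)

# Toy 64x64 "sample" of vertical stripes; the synthesized texture has
# the same power spectrum but scrambled phases.
sample = np.sin(np.linspace(0, 8 * np.pi, 64))[None, :] * np.ones((64, 1))
texture = random_phase_noise(sample)
```

By construction the output's power spectrum matches the sample's, which is exactly the property the RPN model prescribes.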
Random generation of structured linear optimization problems
Arthur, J.; Frendewey, J. Jr.
1994-12-31
We describe the ongoing development of a random generator for linear optimization problems (LPs) founded on the concept of block structure. The general LP: minimize z = cx subject to Ax = b, x ≥ 0 can take a variety of special forms determined (primarily) by predefined structures on the matrix A of constraint coefficients. The authors have developed several random problem generators which provide instances of LPs having such structure; in particular (i) general (non-structured) problems, (ii) generalized upper bound (GUB) constraints, (iii) minimum cost network flow problems, (iv) transportation and assignment problems, (v) shortest path problems, (vi) generalized network flow problems, and (vii) multicommodity network flow problems. This paper discusses the general philosophy behind the construction of these generators. In addition, the task of combining the generators into a single generator -- in which the matrix A can contain various blocks, each of a prescribed structure from those mentioned above -- is described.
Faddeev random-phase approximation for molecules
Degroote, Matthias; Van Neck, Dimitri; Barbieri, Carlo
2011-04-15
The Faddeev random-phase approximation is a Green's function technique that makes use of Faddeev equations to couple the motion of a single electron to the two-particle-one-hole and two-hole-one-particle excitations. This method goes beyond the frequently used third-order algebraic diagrammatic construction method: all diagrams involving the exchange of phonons in the particle-hole and particle-particle channel are retained, but the phonons are now described at the level of the random-phase approximation, which includes ground-state correlations, rather than at the Tamm-Dancoff approximation level, where ground-state correlations are excluded. Previously applied to atoms, this paper presents results for small molecules at equilibrium geometry.
Random-effects models for longitudinal data
Laird, N.M.; Ware, J.H.
1982-12-01
Models for the analysis of longitudinal data must recognize the relationship between serial observations on the same unit. Multivariate models with general covariance structure are often difficult to apply to highly unbalanced data, whereas two-stage random-effects models can be used easily. In two-stage models, the probability distributions for the response vectors of different individuals belong to a single family, but some random-effects parameters vary across individuals, with a distribution specified at the second stage. A general family of models is discussed, which includes both growth models and repeated-measures models as special cases. A unified approach to fitting these models, based on a combination of empirical Bayes and maximum likelihood estimation of model parameters and using the EM algorithm, is discussed. Two examples are taken from a current epidemiological study of the health effects of air pollution.
Product, generic, and random generic quantum satisfiability
Laumann, C. R.; Sondhi, S. L.; Laeuchli, A. M.; Moessner, R.; Scardicchio, A.
2010-06-15
We report a cluster of results on k-QSAT, the problem of quantum satisfiability for k-qubit projectors which generalizes classical satisfiability with k-bit clauses to the quantum setting. First we define the NP-complete problem of product satisfiability and give a geometrical criterion for deciding when a QSAT interaction graph is product satisfiable with positive probability. We show that the same criterion suffices to establish quantum satisfiability for all projectors. Second, we apply these results to the random graph ensemble with generic projectors and obtain improved lower bounds on the location of the SAT-unSAT transition. Third, we present numerical results on random, generic satisfiability which provide estimates for the location of the transition for k=3 and k=4 and mild evidence for the existence of a phase which is satisfiable by entangled states alone.
Random graphs containing arbitrary distributions of subgraphs
NASA Astrophysics Data System (ADS)
Karrer, Brian; Newman, M. E. J.
2010-12-01
Traditional random graph models of networks generate networks that are locally treelike, meaning that all local neighborhoods take the form of trees. In this respect such models are highly unrealistic, most real networks having strongly nontreelike neighborhoods that contain short loops, cliques, or other biconnected subgraphs. In this paper we propose and analyze a class of random graph models that incorporates general subgraphs, allowing for nontreelike neighborhoods while still remaining solvable for many fundamental network properties. Among other things we give solutions for the size of the giant component, the position of the phase transition at which the giant component appears, and percolation properties for both site and bond percolation on networks generated by the model.
Resolving social dilemmas on evolving random networks
NASA Astrophysics Data System (ADS)
Szolnoki, Attila; Perc, Matjaž
2009-05-01
We show that strategy-independent adaptations of random interaction networks can induce powerful mechanisms, ranging from the Red Queen to group selection, which promote cooperation in evolutionary social dilemmas. These two mechanisms emerge spontaneously as dynamical processes due to deletions and additions of links, which are performed whenever players adopt new strategies and after a certain number of game iterations, respectively. The potency of cooperation promotion, as well as the mechanism responsible for it, can thereby be tuned via a single parameter determining the frequency of link additions. We thus demonstrate that coevolving random networks may evoke an appropriate mechanism for each social dilemma, such that cooperation prevails even in highly unfavorable conditions.
Residual Defect Density in Random Disks Deposits
Topic, Nikola; Pöschel, Thorsten; Gallas, Jason A. C.
2015-01-01
We investigate the residual distribution of structural defects in very tall packings of disks deposited randomly in large channels. By performing simulations involving the sedimentation of up to 50 × 10^9 particles we find all deposits to consistently show a non-zero residual density of defects obeying a characteristic power-law as a function of the channel width. This remarkable finding corrects the widespread belief that the density of defects should vanish algebraically with growing height. A non-zero residual density of defects implies a type of long-range spatial order in the packing, as opposed to only local ordering. In addition, we find deposits of particles to involve considerably less randomness than generally presumed. PMID:26235809
Non-volatile magnetic random access memory
NASA Technical Reports Server (NTRS)
Katti, Romney R. (Inventor); Stadler, Henry L. (Inventor); Wu, Jiin-Chuan (Inventor)
1994-01-01
Improvements are made in a non-volatile magnetic random access memory. Such a memory is comprised of an array of unit cells, each having a Hall-effect sensor and a thin-film magnetic element made of material having an in-plane, uniaxial anisotropy and in-plane, bipolar remanent magnetization states. The Hall-effect sensor is made more sensitive by using a 1-μm-thick molecular-beam-epitaxy-grown InAs layer on a silicon substrate by employing a GaAs/AlGaAs/InAlAs superlattice buffer layer. One improvement avoids current shunting problems of matrix architecture. Another improvement reduces the required magnetizing current for the micromagnets. Another improvement relates to the use of GaAs technology wherein high electron-mobility GaAs MESFETs provide faster switching times. Still another improvement relates to a method for configuring the invention as a three-dimensional random access memory.
Conformational transitions in random heteropolymer models.
Blavatska, Viktoria; Janke, Wolfhard
2014-01-21
We study the conformational properties of heteropolymers containing two types of monomers A and B, modeled as self-attracting self-avoiding random walks on a regular lattice. Such a model can describe in particular the sequences of hydrophobic and hydrophilic residues in proteins [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] and polyampholytes with oppositely charged groups [Y. Kantor and M. Kardar, Europhys. Lett. 28, 169 (1994)]. Treating the sequences of the two types of monomers as quenched random variables, we provide a systematic analysis of possible generalizations of this model. To this end we apply the pruned-enriched Rosenbluth chain-growth algorithm, which allows us to obtain the phase diagrams of extended and compact state coexistence as a function of both the temperature and the fraction of A and B monomers along the heteropolymer chain. PMID:25669411
Diffraction of Random Noble Means Words
NASA Astrophysics Data System (ADS)
Moll, Markus
2014-09-01
In this paper, several aspects of the random noble means substitution are studied. Beyond important dynamical facets such as the frequency of subwords and the computation of the topological entropy, the important issue of ergodicity is addressed. From the geometrical point of view, we outline a suitable cut and project setting for associated point sets and present results for the spectral analysis of the diffraction measure.
Relativistic diffusive motion in random electromagnetic fields
NASA Astrophysics Data System (ADS)
Haba, Z.
2011-08-01
We show that the relativistic dynamics in a Gaussian random electromagnetic field can be approximated by the relativistic diffusion of Schay and Dudley. Lorentz invariant dynamics in the proper time leads to the diffusion in the proper time. The dynamics in the laboratory time gives the diffusive transport equation corresponding to the Jüttner equilibrium at the inverse temperature β⁻¹ = mc². The diffusion constant is expressed by the field strength correlation function (Kubo's formula).
Magnetic Analog Random-Access Memory
NASA Technical Reports Server (NTRS)
Katti, Romney R.; Wu, Jiin-Chuan; Stadler, Henry L.
1991-01-01
Proposed integrated, solid-state, analog random-access memory based on principle of magnetic writing and magnetoresistive reading. Current in writing conductor magnetizes storage layer. Remanent magnetization in storage layer penetrates readout layer and is detected by magnetoresistive effect or Hall effect. Memory cells are part of integrated circuit including associated reading and writing transistors. Intended to provide high storage density, rapid access, and nonvolatile storage while consuming little power and remaining relatively invulnerable to ionizing radiation.
Stability of SIRS system with random perturbations
NASA Astrophysics Data System (ADS)
Lu, Qiuying
2009-09-01
Epidemiological models with bilinear incidence rate λSI usually have an asymptotically stable trivial equilibrium corresponding to the disease-free state, or an asymptotically stable non-trivial equilibrium (i.e. interior equilibrium) corresponding to the endemic state. In this paper, we consider an epidemiological model, which is an SIRS model with or without distributed time delay influenced by random perturbations. We present the stability conditions of the disease-free equilibrium of the associated stochastic SIRS system.
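A stochastic SIRS system of this kind can be integrated with the Euler-Maruyama scheme. The sketch below perturbs the transmission term with multiplicative noise, which is one common choice; the paper's exact noise structure, delay kernel, and parameter values may differ, and the numbers here are illustrative.

```python
import math
import random

def sirs_em(beta=0.5, gamma=0.2, delta=0.1, sigma=0.05,
            s0=0.99, i0=0.01, t_end=100.0, dt=0.01, seed=4):
    """Euler-Maruyama integration of an SIRS model whose S -> I
    transmission term carries multiplicative noise:
        dS = (-beta*S*I + delta*R) dt - sigma*S*I dW
        dI = ( beta*S*I - gamma*I) dt + sigma*S*I dW
        dR = ( gamma*I - delta*R) dt
    """
    rng = random.Random(seed)
    s, i, r = s0, i0, 0.0
    for _ in range(int(t_end / dt)):
        dw = rng.gauss(0.0, math.sqrt(dt))
        flow = beta * s * i * dt + sigma * s * i * dw   # S -> I transfer
        s, i, r = (s - flow + delta * r * dt,
                   i + flow - gamma * i * dt,
                   r + gamma * i * dt - delta * r * dt)
    return s, i, r

s, i, r = sirs_em()
```

Because every flow and noise term appears once with each sign, S + I + R is conserved along the trajectory, a useful sanity check on the discretization.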
Random laser action in bovine semen
NASA Astrophysics Data System (ADS)
Smuk, Andrei; Lazaro, Edgar; Olson, Leif P.; Lawandy, N. M.
2011-03-01
Experiments using bovine semen reveal that the addition of a high-gain water-soluble dye results in random laser action when excited by a Q-switched, frequency-doubled Nd:YAG laser. The data show that the linewidth collapse of the emission is correlated to the sperm count of the individual samples, potentially making this a rapid, low-sample-volume approach to count determination.
A Random Walk Picture of Basketball
NASA Astrophysics Data System (ADS)
Gabel, Alan; Redner, Sidney
2012-02-01
We analyze NBA basketball play-by-play data and find that scoring is well described by a weakly biased, anti-persistent, continuous-time random walk. The time between successive scoring events follows an exponential distribution, with little memory between events. We account for a wide variety of statistical properties of scoring, such as the distribution of the score difference between opponents and the fraction of game time that one team is in the lead.
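The model is simple enough to simulate directly: scoring events arrive as a Poisson process (exponential waiting times) and each event moves the score difference by ±1. The event rate and game length below are rough figures, and the weak bias and anti-persistence reported in the paper are omitted for brevity.

```python
import random

def simulate_game(rng, rate=0.03, t_game=2880.0):
    """One game as a continuous-time random walk: exponential waiting
    times at `rate` events/second over a 2880 s regulation game, each
    event shifting the score difference by +-1 with equal probability.
    Returns (final score difference, number of scoring events)."""
    t, diff, events = 0.0, 0, 0
    while True:
        t += rng.expovariate(rate)
        if t > t_game:
            return diff, events
        diff += 1 if rng.random() < 0.5 else -1
        events += 1

rng = random.Random(5)
games = [simulate_game(rng) for _ in range(500)]
mean_events = sum(e for _, e in games) / len(games)
# Expected events per game: rate * t_game ~ 86
```

The spread of the final score difference across games then approximates the distribution the paper extracts from play-by-play data.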
Extended series expansions for random sequential adsorption
NASA Astrophysics Data System (ADS)
Gan, Chee Kwan; Wang, Jian-Sheng
1998-02-01
We express the coverage (occupation fraction) θ in powers of time t for four models of two-dimensional lattice random sequential adsorption (RSA) to very high orders by improving an algorithm developed by the present authors [J. Phys. A 29, L177 (1996)]. Each of these series is, to the best of our knowledge, the longest at the present. We analyze the series and deduce accurate estimates for the jamming coverage of the models.
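As a direct-simulation counterpart to the series approach (this is not the authors' expansion technique), the sketch below runs lattice RSA of dimers on a periodic square lattice; the lattice size, attempt count, and seed are illustrative.

```python
import random

def rsa_dimers(L=50, attempts=200000, seed=6):
    """Random sequential adsorption of dimers on an LxL square lattice
    with periodic boundaries: repeatedly pick a random bond and occupy
    both of its sites if they are still empty.  The coverage theta(t)
    grows toward the jamming limit (about 0.9068 for this model)."""
    rng = random.Random(seed)
    occ = [[False] * L for _ in range(L)]
    filled = 0
    for _ in range(attempts):
        x, y = rng.randrange(L), rng.randrange(L)
        if rng.random() < 0.5:
            x2, y2 = (x + 1) % L, y      # horizontal dimer
        else:
            x2, y2 = x, (y + 1) % L      # vertical dimer
        if not occ[y][x] and not occ[y2][x2]:
            occ[y][x] = occ[y2][x2] = True
            filled += 2
    return filled / (L * L)

theta = rsa_dimers()
```

Recording theta after each batch of attempts gives the coverage-versus-time curve whose short-time behaviour the series expansions capture analytically.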
Random coding strategies for minimum entropy
NASA Technical Reports Server (NTRS)
Posner, E. C.
1975-01-01
This paper proves that there exists a fixed random coding strategy for block coding a memoryless information source to achieve the absolute epsilon entropy of the source. That is, the strategy can be chosen independent of the block length. The principal new tool is an easy result on the semicontinuity of the relative entropy functional of one probability distribution with respect to another. The theorem generalizes a result from rate-distortion theory to the 'zero-infinity' case.
Quantum games on evolving random networks
NASA Astrophysics Data System (ADS)
Pawela, Łukasz
2016-09-01
We study the advantages of quantum strategies in evolutionary social dilemmas on evolving random networks. We focus our study on the two-player games: prisoner's dilemma, snowdrift and stag-hunt games. The obtained result show the benefits of quantum strategies for the prisoner's dilemma game. For the other two games, we obtain regions of parameters where the quantum strategies dominate, as well as regions where the classical strategies coexist.
Constructing acoustic timefronts using random matrix theory.
Hegewisch, Katherine C; Tomsovic, Steven
2013-10-01
In a recent letter [Hegewisch and Tomsovic, Europhys. Lett. 97, 34002 (2012)], random matrix theory is introduced for long-range acoustic propagation in the ocean. The theory is expressed in terms of unitary propagation matrices that represent the scattering between acoustic modes due to sound speed fluctuations induced by the ocean's internal waves. The scattering exhibits a power-law decay as a function of the differences in mode numbers thereby generating a power-law, banded, random unitary matrix ensemble. This work gives a more complete account of that approach and extends the methods to the construction of an ensemble of acoustic timefronts. The result is a very efficient method for studying the statistical properties of timefronts at various propagation ranges that agrees well with propagation based on the parabolic equation. It helps identify which information about the ocean environment can be deduced from the timefronts and how to connect features of the data to that environmental information. It also makes direct connections to methods used in other disordered waveguide contexts where the use of random matrix theory has a multi-decade history. PMID:24116514
Physical Principle for Generation of Randomness
NASA Technical Reports Server (NTRS)
Zak, Michail
2009-01-01
A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)
Local volume fraction fluctuations in random media
Quintanilla, J.; Torquato, S.
1997-02-01
Although the volume fraction is a constant for a statistically homogeneous random medium, on a spatially local level it fluctuates. We study the full distribution of volume fraction within an observation window of finite size for models of random media. A formula due to Lu and Torquato for the standard deviation or "coarseness" associated with the local volume fraction ξ is extended for the nth moment of ξ for any n. The distribution function F_L of the local volume fraction of five different model microstructures is evaluated using analytical and computer-simulation methods for a wide range of window sizes and overall volume fractions. On the line, we examine a system of fully penetrable rods and a system of totally impenetrable rods formed by random sequential addition (RSA). In the plane, we study RSA totally impenetrable disks and fully penetrable aligned squares. In three dimensions, we study fully penetrable aligned cubes. In the case of fully penetrable rods, we will also simplify and numerically invert a prior analytical result for the Laplace transform of F_L. In all of these models, we show that, for sufficiently large window sizes, F_L can be reasonably approximated by the normal distribution. © 1997 American Institute of Physics.
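For the simplest of these models, fully penetrable rods on the line, the local volume fraction is easy to sample: rod left ends form a Poisson process and ξ is the covered fraction of a finite window. The sketch below draws an independent realization per window; the density, rod length, and window size are illustrative assumptions.

```python
import numpy as np

def local_volume_fraction(rho=1.0, ell=1.0, window=5.0,
                          n_windows=2000, seed=7):
    """Sample the local volume fraction xi of fully penetrable rods on
    the line: rod left ends are a Poisson process of density rho, each
    rod has length ell, and xi is the covered fraction of [0, window].
    Returns the mean and standard deviation of xi over the windows."""
    rng = np.random.default_rng(seed)
    xis = []
    for _ in range(n_windows):
        # Only rods with left end in [-ell, window] can touch the window.
        n = rng.poisson(rho * (window + ell))
        lefts = rng.uniform(-ell, window, n)
        ivals = sorted((max(a, 0.0), min(a + ell, window)) for a in lefts)
        covered, end = 0.0, 0.0
        for a, b in ivals:               # merge overlapping intervals
            if b > end:
                covered += b - max(a, end)
                end = b
        xis.append(covered / window)
    xis = np.asarray(xis)
    return xis.mean(), xis.std()

mean_xi, std_xi = local_volume_fraction()
# Global volume fraction of fully penetrable rods: 1 - exp(-rho*ell)
```

The sample mean of ξ should sit near the global volume fraction 1 - exp(-ρℓ) ≈ 0.632 for ρ = ℓ = 1, while its standard deviation is the window-size-dependent coarseness studied in the paper.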
Polar motion under anisotropic random load
NASA Astrophysics Data System (ADS)
Tsurkis, I. Ya.; Kuchai, M. S.; Sinyukhina, S. V.
2014-01-01
The probabilistic approach to the description of the Chandler wobble is expanded to the case of anisotropic random load. The polar motion is treated as a two-dimensional (2D) Markov process (the solution of the Liouville equation) with discrete time. It is shown that with a sufficiently large time step Δ, the polar motion can be considered as an isotropic process irrespective of the particular ratio between the eigenvalues of the diffusion matrix, which characterizes the right-hand side of this equation (random load). The problem of reaching the boundary of the domain [E_min, E_max] by the energy of the pole E(t) = x₁² + x₂² is considered. With a time step Δ of 1 year and the length of the time series of the observations N = 150, the correction for anisotropy to the total probability P* of a drop by a factor of five in the amplitude of the Chandler wobble A = √E does not exceed 10⁻², and the probability P* is above 0.3 (if the Q-factor of the mantle is below 500). Thus, it is demonstrated that the observed variations in amplitude A(t) can be explained in the context of the probabilistic approach without hypothesizing the isotropy of the random load.
The resistance of randomly grown trees
NASA Astrophysics Data System (ADS)
Colman, E. R.; Rodgers, G. J.
2011-12-01
An electrical network with the structure of a random tree is considered: starting from a root vertex, in one iteration each leaf (a vertex with zero or one adjacent edges) of the tree is extended by either a single edge with probability p or two edges with probability 1 - p. With each edge having a resistance equal to 1Ω, the total resistance Rn between the root vertex and a busbar connecting all the vertices at the nth level is considered. A dynamical system is presented which approximates Rn, it is shown that the mean value
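The model lends itself to a direct recursive computation: branches leaving a node combine in parallel, each contributing one 1 Ω edge in series with the subtree below it. The depth, branching probability, and sample count below are illustrative.

```python
import random

def tree_resistance(n_levels, p, rng):
    """Resistance between the root and a busbar joining all level-n
    vertices of the randomly grown tree: each leaf sprouts one edge
    with probability p or two edges with probability 1 - p, and every
    edge is 1 ohm."""
    def resistance(level):
        if level == n_levels:
            return 0.0                       # the busbar shorts level n
        n_children = 1 if rng.random() < p else 2
        # Parallel combination: each branch is 1 ohm + subtree below.
        g = sum(1.0 / (1.0 + resistance(level + 1))
                for _ in range(n_children))
        return 1.0 / g
    return resistance(0)

rng = random.Random(8)
samples = [tree_resistance(12, 0.5, rng) for _ in range(200)]
mean_r = sum(samples) / len(samples)
```

The two deterministic limits bracket the answer: p = 1 gives a chain with R_n = n, while p = 0 gives a full binary tree with R_n = 1 - 2^(-n).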
Variational Infinite Hidden Conditional Random Fields.
Bousmalis, Konstantinos; Zafeiriou, Stefanos; Morency, Louis-Philippe; Pantic, Maja; Ghahramani, Zoubin
2015-09-01
Hidden conditional random fields (HCRFs) are discriminative latent variable models which have been shown to successfully learn the hidden structure of a given classification problem. An Infinite hidden conditional random field is a hidden conditional random field with a countably infinite number of hidden states, which rids us not only of the necessity to specify a priori a fixed number of hidden states available but also of the problem of overfitting. Markov chain Monte Carlo (MCMC) sampling algorithms are often employed for inference in such models. However, convergence of such algorithms is rather difficult to verify, and as the complexity of the task at hand increases the computational cost of such algorithms often becomes prohibitive. These limitations can be overcome by variational techniques. In this paper, we present a generalized framework for infinite HCRF models, and a novel variational inference approach on a model based on coupled Dirichlet Process Mixtures, the HCRF-DPM. We show that the variational HCRF-DPM is able to converge to a correct number of represented hidden states, and performs as well as the best parametric HCRFs-chosen via cross-validation-for the difficult tasks of recognizing instances of agreement, disagreement, and pain in audiovisual sequences. PMID:26353136
Liquid Crystal Ordering of Random DNA Oligomers
NASA Astrophysics Data System (ADS)
Bellini, Tommaso; Zanchetta, Giuliano; Fraccia, Tommaso; Cerbino, Roberto; Tsai, Ethan; Moran, Mark; Smith, Gregory; Walba, David; Clark, Noel
2012-02-01
Concentrated solutions of DNA oligomers (6 to 20 base pairs) organize into chiral nematic (NEM) and columnar (COL) liquid crystal (LC) phases. When the oligomer duplexes are mixed with single strands, LC phase formation proceeds through macroscopic phase separation, as a consequence of the combination of various self-assembly processes including strand pairing, reversible linear aggregation, demixing and LC ordering. We extended our investigation to the case of LC ordering in oligonucleotides whose sequences are partially or entirely randomly chosen, and we observed LC phases even in entirely random 20mers, corresponding to a family of 4^20 ≈ 10^12 different sequences. We have tracked the origin of this behaviour: random sequences pair into generally defected duplexes, a large fraction of them terminating with stretches of unpaired bases (overhangs); overhangs promote linear aggregation of duplexes, with a mean strength depending on the overhang length; LC formation is accompanied by a phase separation where the duplexes with longer overhangs aggregate to form COL LC domains that coexist with an isotropic fluid rich in duplexes whose structure cannot aggregate.
Adaptive Random Testing with Combinatorial Input Domain
Lu, Yansheng
2014-01-01
Random testing (RT) is a fundamental testing technique to assess software reliability, by simply selecting test cases in a random manner from the whole input domain. As an enhancement of RT, adaptive random testing (ART) has better failure-detection capability and has been widely applied in different scenarios, such as numerical programs, some object-oriented programs, and mobile applications. However, not much work has been done on the effectiveness of ART for the programs with combinatorial input domain (i.e., the set of categorical data). To extend the ideas to the testing for combinatorial input domain, we have adopted different similarity measures that are widely used for categorical data in data mining and have proposed two similarity measures based on interaction coverage. Then, we propose a new version named ART-CID as an extension of ART in combinatorial input domain, which selects an element from categorical data as the next test case such that it has the lowest similarity against already generated test cases. Experimental results show that ART-CID generally performs better than RT, with respect to different evaluation metrics. PMID:24772036
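The core ART loop for categorical inputs can be sketched as follows, using the plain matching-coefficient similarity and a distance-maximising candidate step; the actual ART-CID method also weighs interaction coverage, and the domains, candidate count, and seed here are invented for illustration.

```python
import random

def similarity(a, b):
    """Matching coefficient: fraction of positions with equal values
    (one of the categorical similarity measures the paper compares)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def art_next_test(executed, domains, rng, n_candidates=10):
    """ART step for categorical inputs: among random candidates, pick
    the one whose worst-case similarity to executed tests is lowest."""
    candidates = [tuple(rng.choice(d) for d in domains)
                  for _ in range(n_candidates)]
    return min(candidates,
               key=lambda c: max((similarity(c, e) for e in executed),
                                 default=0.0))

# Hypothetical three-parameter categorical input domain.
domains = [["red", "green", "blue"], ["small", "large"], ["on", "off"]]
rng = random.Random(9)
suite = []
for _ in range(5):
    suite.append(art_next_test(suite, domains, rng))
```

Each new test is pushed away from the already executed ones, which is the spreading behaviour that gives ART its improved failure-detection capability over plain RT.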
Kronberg, J.W.
1993-04-20
An apparatus for selecting at random one item out of N items on average, comprising counter and reset elements for counting repeatedly between zero and N (a number selected by the user), a circuit for activating and deactivating the counter, a comparator to determine whether the counter stopped at a count of zero, and an output to indicate that an item has been selected when the count is zero, or not selected when the count is not zero. Randomness is provided by having the counter cycle very rapidly while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment, so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or in the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.
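The selection principle is easy to model in software: a counter cycling modulo N, sampled after an unpredictable interval, lands on zero with probability 1/N. The sketch below is an assumption-laden software analogue (the jitter range and trial counts are illustrative, not from the patent), standing in for the physical temperature- and timing-dependent variation the apparatus actually uses.

```python
import random

rng = random.Random(0)

def stopped_count(n):
    # A free-running counter cycles 0..n-1 very rapidly; jitter in the
    # activate/deactivate interval makes the elapsed cycle count
    # effectively unpredictable, so the stopped value is ~uniform on 0..n-1.
    elapsed_cycles = rng.randrange(10_000, 1_000_000)  # jittered duration
    return elapsed_cycles % n

n, trials = 5, 20_000
hits = sum(stopped_count(n) == 0 for _ in range(trials))  # "selected" events
rate = hits / trials
print(rate)  # close to 1/n = 0.2
```

The key design point carried over from the patent is that the randomness comes not from the counter itself, which is deterministic, but from the uncertainty in how long it runs before being sampled.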
A parallel algorithm for random searches
NASA Astrophysics Data System (ADS)
Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.
2015-11-01
We discuss a parallelization procedure for a two-dimensional random search by a single individual, a typically sequential process. To preserve the features of the sequential random search in the parallel version, we first analyze the spatial patterns of the encountered targets for different search strategies and densities of homogeneously distributed targets. We identify a lognormal tendency in the distribution of distances between consecutively detected targets. Then, by assigning the corresponding mean and standard deviation of this distribution to each configuration in the parallel simulations (constituted by parallel random walkers), we are able to recover important statistical properties, e.g., the target detection efficiency, of the original problem. The proposed parallel approach achieves a speedup of nearly one order of magnitude compared with the sequential implementation. The algorithm is easily adapted to other instances, such as searches in three dimensions. Its possible range of applicability covers problems in areas as diverse as automated computer searches in high-capacity databases and animal foraging.
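The core statistical trick, replacing the sequential walk with independent draws from the fitted lognormal of inter-target distances, can be sketched as below. The parameters mu and sigma are illustrative placeholders, not values from the paper; in the actual procedure they would be fitted to the sequential search for each strategy and target density.

```python
import math
import random

def parallel_walker_distances(mu, sigma, n_targets, seed=None):
    # Each parallel walker draws its successive inter-target distances
    # directly from the lognormal fitted to the sequential search,
    # instead of re-simulating the full sequential random walk.
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n_targets)]

# Illustrative fitted parameters (mean and std of the log-distances)
mu, sigma = 2.0, 0.5
dists = parallel_walker_distances(mu, sigma, 10_000, seed=42)
mean = sum(dists) / len(dists)
print(mean, math.exp(mu + sigma ** 2 / 2))  # empirical vs theoretical mean
```

Because each walker only needs the two fitted parameters, the draws are embarrassingly parallel, which is the source of the near order-of-magnitude speedup reported in the abstract.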
Component evolution in general random intersection graphs
Bradonjic, Milan; Hagberg, Aric; Hengartner, Nick; Percus, Allon G
2010-01-01
We analyze component evolution in general random intersection graphs (RIGs) and give conditions for the existence and uniqueness of the giant component. Our techniques generalize existing methods for the analysis of component evolution in RIGs. That is, we analyze the survival and extinction properties of a dependent, inhomogeneous Galton-Watson branching process on general RIGs. Our analysis relies on bounding the branching processes and inherits the fundamental concepts from the study of component evolution in Erdos-Renyi graphs. The main challenge comes from the underlying structure of RIGs, in which the number of offspring follows a binomial distribution with a different number of nodes and a different rate at each step of the evolution. RIGs can be interpreted as a model for large, randomly formed non-metric data sets. Besides the mathematical analysis of component evolution, which we provide in this work, we view RIGs as an important random structure that has already found applications in social networks, epidemic networks, blog readership, and wireless sensor networks.
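The RIG construction itself is simple to state: nodes pick random attribute sets, and two nodes are adjacent iff their sets intersect. A minimal sketch of the uniform one-parameter case follows (the general model allows non-uniform attribute probabilities; the parameter values here are arbitrary for illustration).

```python
import random

def random_intersection_graph(n, m, p, seed=None):
    # Uniform RIG: each of n nodes independently includes each of m
    # attributes with probability p; an edge joins two nodes iff their
    # attribute sets share at least one element.
    rng = random.Random(seed)
    attrs = [{a for a in range(m) if rng.random() < p} for _ in range(n)]
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if attrs[i] & attrs[j]}
    return attrs, edges

attrs, edges = random_intersection_graph(n=100, m=50, p=0.05, seed=7)
print(len(edges))
```

Note that edges sharing an attribute are dependent (all nodes holding attribute a form a clique), which is exactly why the branching-process analysis in the paper cannot reuse the independent-edge arguments from Erdos-Renyi graphs unchanged.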
Randomness and multilevel interactions in biology.
Buiatti, Marcello; Longo, Giuseppe
2013-09-01
The dynamic instability of living systems and the "superposition" of different forms of randomness are viewed, in this paper, as components of the contingently changing, or even increasing, organization of life through ontogenesis or evolution. To this purpose, we first survey how classical and quantum physics define randomness differently. We then discuss why this requires, in our view, an enriched understanding of the effects of their concurrent presence in the dynamics of biological systems. Biological randomness is then presented not only as an essential component of the heterogeneous determination and intrinsic unpredictability proper to life phenomena, due to the nesting of, and interaction between, many levels of organization, but also as a key component of their structural stability. We note as well that increasing organization, while increasing "order", induces growing disorder, not only through energy-dispersal effects but also through increasing variability and differentiation. Finally, we discuss the cooperation between diverse components in biological networks; this cooperation implies the presence of constraints due to the particular nature of bio-entanglement and bio-resonance, two notions reviewed and defined in the paper. PMID:23637008
Networked Dynamic Systems: Identification, Controllability, and Randomness
NASA Astrophysics Data System (ADS)
Nabi-Abdolyousefi, Marzieh
This dissertation aims to develop a graph-centric framework for the analysis and synthesis of networked dynamic systems (NDS) consisting of multiple dynamic units that interact via an interconnection topology. We examine three categories of network problems, namely identification, controllability, and randomness. In network identification, a subclass of inverse problems, we establish an explicit relation between the input-output behavior of an NDS and the underlying interaction network. In network controllability, we provide structural and algebraic insights into the features of a network that enable external signals to control the state of its nodes, for certain classes of interconnections, namely path, circulant, and Cartesian networks. We also examine the relation between network controllability and the symmetry structure of the graph. Motivated by these analysis results for the controllability and observability of deterministic networks, a natural question is whether randomness in the network layer, or in the layer of inputs and outputs, generically leads to favorable system-theoretic properties. In this direction, we examine system-theoretic properties of random networks, including controllability, observability, and the performance of optimal feedback controllers and estimators. We explore some ramifications of this analysis framework in opinion dynamics over social networks and in sensor networks estimating the real-time position of a Seaglider from experimental data.
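The interplay between path-network controllability and graph symmetry mentioned above can be illustrated with the standard Kalman rank test on Laplacian (consensus) dynamics. This is a generic textbook check, not necessarily the dissertation's own derivation: a path driven from an end node is controllable, while driving the middle node of a 3-node path fails because the mirror symmetry makes the two end states indistinguishable to the input.

```python
from fractions import Fraction

def path_laplacian(n):
    # Graph Laplacian L of an n-node path; dynamics x' = -L x + B u
    L = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n - 1):
        L[i][i] += 1; L[i + 1][i + 1] += 1
        L[i][i + 1] -= 1; L[i + 1][i] -= 1
    return L

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

def rank(cols):
    # Exact Gaussian elimination over the rationals
    rows = [list(r) for r in zip(*cols)]
    r = 0
    for c in range(len(cols)):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def controllable_from(n, node):
    # Kalman rank condition: rank [B, AB, ..., A^(n-1)B] == n
    A = [[-x for x in row] for row in path_laplacian(n)]
    v = [Fraction(int(i == node)) for i in range(n)]
    cols = []
    for _ in range(n):
        cols.append(v)
        v = matvec(A, v)
    return rank(cols) == n

print(controllable_from(4, 0))  # end node of a 4-path: True
print(controllable_from(3, 1))  # middle of a 3-path: False (symmetry)
```

The failing case is a concrete instance of the symmetry obstruction: any graph automorphism fixing the input node collapses controllability, which motivates the structural analysis in the dissertation.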
Dynamic Random Networks in Dynamic Populations
NASA Astrophysics Data System (ADS)
Britton, Tom; Lindholm, Mathias
2010-05-01
We consider a random network evolving in continuous time in which new nodes are born and old ones may die, and in which undirected edges between nodes are created randomly and may also disappear. The node population is Markovian, and so is the creation and deletion of edges given the node population. Each node is equipped with a random social index, and the intensity at which a node creates new edges is proportional to its social index; the neighbour is chosen either uniformly or, in a modification of the model, proportionally to its own social index. We derive properties of the network as time and the node population tend to infinity. In particular, the degree distribution is shown to be a mixed Poisson distribution, which may exhibit a heavy tail (e.g., a power law) if the social index distribution has a heavy tail. The limiting results are verified by means of simulations, and the model is fitted to a network of sexual contacts.
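The mixed Poisson mechanism is easy to simulate directly: draw a heavy-tailed social index per node, then a Poisson degree with rate proportional to it. The sketch below uses illustrative parameters (a Pareto index and rate constant c chosen for the example, not taken from the paper); its mean degree should approach c times the mean social index.

```python
import math
import random

def poisson(rng, lam):
    # Knuth's multiplication method for Poisson sampling
    # (the stdlib random module has no Poisson generator)
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def mixed_poisson_degrees(n, c, alpha, seed=None):
    # Each node draws a heavy-tailed social index S ~ Pareto(alpha),
    # then a degree ~ Poisson(c * S): a mixed Poisson distribution
    # that inherits the heavy tail of S.
    rng = random.Random(seed)
    return [poisson(rng, c * rng.paretovariate(alpha)) for _ in range(n)]

degrees = mixed_poisson_degrees(n=20_000, c=3.0, alpha=2.5, seed=0)
mean_deg = sum(degrees) / len(degrees)
print(mean_deg)  # theoretical mean: c * alpha / (alpha - 1) = 5.0
```

With a light-tailed social index the same construction gives a near-Poisson degree distribution; the heavy tail of the Pareto mixing is what produces the power-law-like degrees noted in the abstract.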
Random walks on generalized Koch networks
NASA Astrophysics Data System (ADS)
Sun, Weigang
2013-10-01
For deterministically growing networks, determining the topological properties and dynamical processes is a theoretical challenge. In this paper, we study random walks on generalized Koch networks, whose initial state is a globally connected network of r nodes and in which, at each step, every existing node produces m complete graphs. We obtain analytical expressions for the first passage time (FPT), the average return time (ART), i.e. the average of the FPTs for a random walk from node i to return to the starting point i for the first time, and the average sending time (AST), defined as the average of the FPTs from a hub node to all other nodes excluding the hub itself, as functions of the network parameters m and r. For this family of Koch networks, the ART of the newly emerging nodes is identical and increases with the parameters m and r. In addition, the AST of our networks grows with network size N as N ln N and also increases with the parameter m. The results obtained in this paper generalize the random-walk results for the original Koch network.
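For small graphs the FPTs in question can be computed exactly by first-step analysis: the mean hitting time h_i to a target satisfies h_i = 1 + (1/deg(i)) * sum over neighbours j of h_j, with h = 0 at the target. The sketch below solves this linear system exactly on a toy complete graph (not a Koch network; the paper's results come from analytical expressions in m and r, which this numerical check merely illustrates). On K_n the hitting time between any two nodes is n - 1.

```python
from fractions import Fraction

def mean_fpt(adj, target):
    # First-step analysis: solve (I - P) h = 1 on the non-target states
    # by exact Gauss-Jordan elimination over the rationals.
    n = len(adj)
    idx = [i for i in range(n) if i != target]
    m = len(idx)
    A = [[Fraction(0)] * (m + 1) for _ in range(m)]  # augmented system
    for r, i in enumerate(idx):
        deg = sum(adj[i])
        A[r][r] = Fraction(1)
        for c, j in enumerate(idx):
            if adj[i][j]:
                A[r][c] -= Fraction(1, deg)
        A[r][m] = Fraction(1)
    for col in range(m):
        piv = next(r for r in range(col, m) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        pv = A[col][col]
        A[col] = [x / pv for x in A[col]]
        for r in range(m):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    h = {i: A[r][m] for r, i in enumerate(idx)}
    h[target] = Fraction(0)
    return h

# Complete graph K4: hitting time to any fixed node is n - 1 = 3
K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
h = mean_fpt(K4, target=0)
print(h[1], h[2], h[3])
```

Averaging such FPTs over starting nodes gives the ART, and averaging from a hub over all other nodes gives the AST, exactly the quantities for which the paper derives closed forms in m and r.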