Ktismatics

18 January 2011

Academically Adrift

Filed under: Culture, Psychology — ktismatics @ 1:48 pm

Here’s the summary of a new book showing that US university students don’t learn much while they’re in school. The study, conducted by two sociologists, focused on the acquisition not of specific content but of critical thinking and analytical reasoning. While students showed improvement over four years of college, the improvements, say the researchers, weren’t large: on average, seniors scored about half a standard deviation higher than newly enrolled freshmen.

According to the statistical rule of thumb, a 0.5 standard deviation change constitutes a moderate effect, so maybe the researchers are being unduly pessimistic. As far as I can tell, however, the researchers didn’t compare seniors with an age-matched cohort of non-students, so it’s not possible to distinguish school learning from other learning experiences or from simple maturation. The write-up also doesn’t say whether the study accounted for dropouts — it’s possible that those students who make it all the way through to graduation score better not because they learn in school but because they have more innate aptitude for abstraction and analysis.
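
To make the half-standard-deviation figure concrete, here’s a minimal sketch of the standardized mean difference (Cohen’s d) behind this kind of comparison. The scores below are invented and chosen to land near the reported gap; nothing here comes from the study’s actual data.

    # Hypothetical illustration only: invented scores, not the study's data.
    import statistics as stats

    freshmen = [1050, 1120, 980, 1210, 1000, 1160, 1090, 1030]   # hypothetical test scores
    seniors  = [1090, 1160, 1020, 1250, 1040, 1200, 1130, 1070]

    def cohens_d(group_a, group_b):
        """Standardized mean difference using the pooled standard deviation."""
        var_a, var_b = stats.variance(group_a), stats.variance(group_b)
        n_a, n_b = len(group_a), len(group_b)
        pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
        return (stats.mean(group_b) - stats.mean(group_a)) / pooled_sd

    print(f"d = {cohens_d(freshmen, seniors):.2f}")
    # ~0.2 is conventionally "small", ~0.5 "moderate", ~0.8 "large" (Cohen's rule of thumb)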

Some other key findings cited in the article:

The main culprit for lack of academic progress of students, according to the authors, is a lack of rigor. They review data from student surveys to show, for example, that 32 percent of students each semester do not take any courses with more than 40 pages of reading assigned a week, and that half don’t take a single course in which they must write more than 20 pages over the course of a semester. Further, the authors note that students spend, on average, only about 12-14 hours a week studying, and that much of this time is studying in groups.

The research then goes on to find a direct relationship between rigor and gains in learning:

  • Students who study by themselves for more hours each week gain more knowledge — while those who spend more time studying in peer groups see diminishing gains.
  • Students whose classes reflect high expectations (more than 40 pages of reading a week and more than 20 pages of writing a semester) gained more than other students.
  • Students who spend more time in fraternities and sororities show smaller gains than other students.
  • Students who engage in off-campus or extracurricular activities (including clubs and volunteer opportunities) have no notable gains or losses in learning.
  • Students majoring in liberal arts fields see “significantly higher gains in critical thinking, complex reasoning, and writing skills over time than students in other fields of study.” Students majoring in business, education, social work and communications showed the smallest gains. (The authors note that this could be more a reflection of more-demanding reading and writing assignments, on average, in the liberal arts courses than of the substance of the material.)

“[E]ducational practices associated with academic rigor improved student performance, while collegiate experiences associated with social engagement did not,” the authors write.

17 Comments »

  1. I was curious about the metrics used – i.e., I see no particular reason why the number of pages of assigned reading, or particularly the number of pages of assigned writing, is a good universal metric for rigour. It works fine for social science and humanities courses, but not for, say, courses in mathematics or the sciences, which might be assessed by exams, a practicum, or some other sort of assessment. The fact that these metrics are highlighted makes it a bit unsurprising that liberal arts students would perform better: their programs are more likely to require long reading and writing. I’m curious whether the measure of “critical thinking” used for the study is sufficiently differentiated to capture “critical thinking”, as opposed to capturing some other aspect of verbal fluency?

    I don’t have a particular dog in this fight – I’m one of the folks who tends to assign lots of reading and writing… ;-P And I’ve not read the study, so am reacting only to the surface detail in the summary…

    But on its face I find it a bit of a red flag that the summary seems to group critical thinking and complex reasoning (which in different forms are generally valuable), with “writing skills”. If students are measured in a way that requires they possess good writing skills, in order for the measurement instrument to be able to pick up good critical thinking and complex reasoning skills, the instrument could be responsible both for the low results, and for the fact that liberal arts students perform comparatively better…

    This isn’t getting into the implications of assessing the impact of university education after specifically bracketing content-specific knowledge…

    Comment by N. Pepperell — 19 January 2011 @ 3:26 am

  2. “[E]ducational practices associated with academic rigor improved student performance, while collegiate experiences associated with social engagement did not,”

    This is why we must choose our children’s social activities for them and stick them in a coal cellar if they defy us.

    Comment by NB — 19 January 2011 @ 6:30 am

  3. Good points, NP. The dog I have in the fight is my kid, who starts university in the fall. Critiquing the summary of this study sounds like the sort of task that a student might encounter on the CLA, the instrument used in this study for measuring learning. I’d never heard of the CLA before, but here’s a brief description from their website:

    Unlike most standardized tests in postsecondary education, the CLA does not include any multiple-choice or true/false questions. Instead, the CLA program uses several types of “constructed response” tasks (i.e., students create their own answers like an essay test). In a typical administration of the CLA, students complete either a Performance Task or an Analytic Writing Task. Students participating in the Lumina Longitudinal Study completed both.

    In the Performance Task, students are instructed to draft a letter, memo, or similar document (e.g., to a supervisor, co-worker, or company) about some matter. They also are given a “library” of documents, some of which are more credible and relevant to the task than others, and some may include contradictory information. Students have 90 minutes to evaluate the information provided, synthesize and organize that evidence, draw conclusions, and create a cogent response.

    The Analytic Writing Task consists of two sections. First, students are allotted 45 minutes for the Make-an-Argument task in which they present their perspective on an issue like “Government funding would be better spent on preventing crime than dealing with criminals after the fact.” Next, the Critique-an-Argument task gives students 30 minutes to identify and describe logical flaws in an argument. Here is one example:

    “Butter has now been replaced by margarine in Happy Pancake House restaurants throughout the southwestern United States. Only about 2 percent of customers have complained, indicating that 98 people out of 100 are happy with the change. Furthermore, many servers have reported that a number of customers who still ask for butter do not complain when they are given margarine instead. Clearly, either these customers cannot distinguish margarine from butter, or they use the term “butter” to refer to either butter or margarine. Thus, to avoid the expense of purchasing butter, the Happy Pancake House should extend this cost-saving change to its restaurants in the southeast and northeast as well.”

    Students’ responses are scored against a set of predefined criteria, so there’s a degree of human judgment involved. I haven’t read enough of the psychometrics to see what sort of inter-rater reliability there is between graders. I believe the study used a cross-sectional sample scattered across the 4 years of university, rather than a longitudinal study following the same individuals over time. The CLA website includes the summary of a study showing that cross-sectional differences by year in school and longitudinal changes are quite similar.
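
    Just to illustrate what an inter-rater check might look like, here’s a toy sketch of Cohen’s kappa for two hypothetical graders scoring the same responses on a 1–6 rubric. The ratings are invented, and a real psychometric report would more likely use an intraclass correlation or a weighted kappa, but the basic idea is the same:

        # Toy illustration: invented ratings, not CLA data.
        from collections import Counter

        rater_a = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]
        rater_b = [4, 4, 3, 5, 2, 5, 4, 3, 3, 5]

        def cohens_kappa(a, b):
            """Chance-corrected exact agreement between two raters."""
            n = len(a)
            observed = sum(x == y for x, y in zip(a, b)) / n
            marg_a, marg_b = Counter(a), Counter(b)
            expected = sum((marg_a[k] / n) * (marg_b[k] / n) for k in set(a) | set(b))
            return (observed - expected) / (1 - expected)

        print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")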

    Why pages of assigned reading and writing as a measure of rigor? I’m sure it’s in part because they’re easy to quantify. It’s possible the researchers collected data more relevant to math and science as well — numbers of homework problems and labs assigned, etc. — but that these measures didn’t correlate with CLA test results. Number of hours spent studying by oneself also correlated significantly with CLA improvement, and I’d say that measure does apply to all areas of study. The summary reports that liberal arts students performed the best while business, education, social work, and communications students did the worst. Presumably math and science students came out somewhere in the middle. I agree that writing skills aren’t as important in math/science as in liberal arts, but critical thinking and complex reasoning are. Presumably the study breaks down results in more detail.

    Of course this study explores only one of myriad possible ways of evaluating educational effectiveness. In prior posts I’ve looked at some findings from standardized achievement tests, which do include measures of content acquisition, but many educators dismiss such findings as institution- and discipline-dependent. These same educators contend that critical thinking, complex reasoning, and clear expository writing are the true indicators of teaching effectiveness and educational attainment across the board, but that such indicators are too qualitative to be measured. So here’s a study that measures them.

    When it comes to students’ level of improvement through the university years, I’m not sure what would satisfy either the researchers or the readers of this study. The researchers interpret their own findings as documenting rather desultory effects of education, but half a standard deviation improvement is pretty strong compared with many other studies of educational effectiveness. Maybe in today’s precarious economic climate, where deep cuts in tax support to higher education are a widely supported policy move, the publishers believed they could sell more books if they spun the findings toward demonstrated failure rather than success.

    Comment by ktismatics — 19 January 2011 @ 7:37 am

  4. “This is why we must choose our children’s social activities for them and stick them in a coal cellar if they defy us.”

    Right you are, NB. The release of this study does dovetail well with the Chinese mother story from the preceding post. What we need now is a study evaluating changes in university students’ psychosocial well-being over time. These measurement tools exist, and maybe the studies have been done looking at whether, say, more assigned reading/writing correlates with either lower or higher anxiety and depression. It’s certainly the case that I remember more about the social aspects of college than I do about the courses I took. Still, I disliked study groups and fraternities/sororities, so any research that casts aspersions on these things I support. I wonder what correlation has been documented between changes in critical reasoning and the number of cans of beer consumed per week?

    Comment by ktismatics — 19 January 2011 @ 7:44 am

    • I always found beer improved my critical reasoning. Although, I admit that there’s a pretty short window of clarity before it becomes fevered, then maudlin, then incomprehensible.

      It’s important to view university as much a social experience as an educational one. We don’t really have fraternities or sororities over here, we just go to the bar. The little I know about them has been gleaned from things like Animal House etc.

      Comment by NB — 19 January 2011 @ 9:13 am

    • I’ve often wished that I’d taken a year off before starting university. My identity issues, and especially my resistance to “White Swan” excellence, resulted in my spending relatively too much time socializing and too little studying. It costs a lot of money to go to university these days, whereas socializing is pretty cheap — just the cost of beer. Hopefully our kid will get her money’s worth on both fronts.

      Comment by ktismatics — 19 January 2011 @ 12:21 pm

      • “My identity issues, and especially my resistance to “White Swan” excellence, resulted in my spending relatively too much time socializing and too little studying.”

        I hear you. Same here. I think all kids should take a year off before university too. Maybe in work placements. That could really focus the mind; i.e. make someone realise that accountancy isn’t all that exciting etc. I was way too green. For the whole first year I did hardly any work and almost got chucked out. There was quite a bit of partying, but it was also due to the fact that I didn’t really know whether I wanted to be at university at all.

        Black Swan is released here tomorrow. The trailer makes it look terrible, but I’ve heard it’s nicely freaky. For some reason, I’ve never seen much Aronofsky stuff. I saw Pi when it came out and didn’t think too much of it. I think I should see it again. Friends of mine have said that I should see Requiem for a Dream. I love The Red Shoes, naturally!

        Comment by NB — 20 January 2011 @ 3:19 am

      • I dropped out of college after my junior year, expecting never to return. My plan was to become a vagabond novelist. I worked in a warehouse for a while, traveled awhile, then returned to school after one year off. Maybe I’d have been better off with the vagabond writer plan, although in my case it eventually came back, the return of the repressed, the zombie back from the dead.

        Our kid, in contrast, seems quite motivated to go on with her schooling. No doubt part of it is the pull of her high school colleagues, but I think she really likes scholarship, not necessarily for career advancement but for its own sake. I’ve occasionally pushed the “gap year” idea on her, but she’s bound and determined to succeed academically — the little fool!

        Comment by ktismatics — 20 January 2011 @ 8:23 am

  5. Regarding whether the results of this study show good or poor outcomes of university education, I previously wrote a post about a study of teacher effectiveness, based on a teacher training and evaluation program called the NBPTS. That study found that differences in learning outcomes attributable to teacher differences amounted to between 0.1 and 0.2 standard deviations. What did the researcher conclude from these findings?

    “[T]his is the first large-scale study that appears to confirm the NBPTS assessment process is effectively identifying those teachers who contribute to relatively larger student learning gains. This finding is important both because it provides some indication of a positive return on the investment in NBPTS, and on a more fundamental level, it demonstrates that it is actually possible to identify teacher effectiveness through NBPTS-type assessments.”

    In contrast, the study of university outcomes in the present post shows a much stronger outcome (0.5 standard deviation versus 0.1-0.2 s.d.), but here the researchers conclude that universities are doing a poor job. Why? Claiming that one can distinguish effective from ineffective teachers gives school boards and governments justification for eliminating highly-paid teachers who score relatively poorly on their assessment tool. In contrast, claiming that higher education sucks adds support for a kind of “shock doctrine” call for dismantling the public educational system and rebuilding it on some other grounds — perhaps privatizing it.

    If one lets the numbers speak for themselves, the conclusions are reversed: university education in general has a pretty strong impact on student learning, whereas differences between individual teachers have a weak impact.
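
    To put those two effect sizes side by side, here’s a quick back-of-the-envelope conversion, assuming roughly normal score distributions (an assumption of mine, not something reported in either study), into the share of the comparison group that the average student on the higher-scoring side would outscore:

        # Back-of-the-envelope only; assumes roughly normal score distributions.
        from math import erf, sqrt

        def normal_cdf(z):
            return 0.5 * (1 + erf(z / sqrt(2)))

        for label, d in [("college seniors vs freshmen", 0.5),
                         ("NBPTS-certified vs other teachers (midpoint of 0.1-0.2)", 0.15)]:
            # Cohen's U3: fraction of the comparison group below the average "treated" case
            print(f"{label}: d = {d:.2f} -> average case outscores "
                  f"{100 * normal_cdf(d):.0f}% of the comparison group")

    Roughly 69% versus 56% on those assumptions, which makes the asymmetry between the two studies’ conclusions even harder to justify.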

    Comment by ktismatics — 19 January 2011 @ 8:53 am

  6. Yeah – this is very nice:

    In contrast, the study of university outcomes in the present post shows a much stronger outcome (0.5 standard deviation versus 0.1-0.2 s.d.), but here the researchers conclude that universities are doing a poor job. Why? Claiming that one can distinguish effective from ineffective teachers gives school boards and governments justification for eliminating highly-paid teachers who score relatively poorly on their assessment tool. In contrast, claiming that higher education sucks adds support for a kind of “shock doctrine” call for dismantling the public educational system and rebuilding it on some other grounds — perhaps privatizing it.

    If one lets the numbers speak for themselves, the conclusions are reversed: university education in general has a pretty strong impact on student learning, whereas differences between individual teachers have a weak impact.

    My curiosity about the CLA is whether it might be inadvertently testing a means, rather than an end. In other words, students who have more practice with longer reading- and writing-based assessments are going to find it easier, all other things being equal, to demonstrate whatever critical thinking and complex reasoning skills they have in a task that only allows those skills to be demonstrated via longer reading and writing tasks… Differentiating the measurement instrument from what it purports to measure is complicated at the best of times. I’m just curious how they’ve determined the validity of the measurement instrument… Presumably those humanities students who seem to be performing (relatively) better on this instrument would be a bit less able to demonstrate their critical thinking/complex reasoning skills if asked to spot the flaws, say, in a mathematical proof, or devise the most efficient/elegant/etc equation when presented with a specific problem…

    I’m involved this coming term in what might be a similarly confounded research exercise at my own university :-) They want to conduct randomised trials to determine the most effective means of teaching critical thinking skills for our undergraduates. I’ve already had a long discussion about the limits of pure “randomisation” (I have a team of tutorial staff of varying levels of experience and skill – if you literally randomly select tutorials, the confounds are just jaw dropping… so we’ve settled on, essentially, using tutors as their own controls – asking tutors to trial specific teaching techniques in one tutorial, and not others… This is also problematic on various grounds – but perhaps grounds more likely to cancel one another out in some sort of noise… Maybe…)
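
    A hypothetical sketch of how the tutors-as-their-own-controls comparison might be analyzed once data come in: take each tutor’s gain in the trial tutorial minus the gain in their other tutorial, so that stable tutor characteristics difference out. The numbers are invented, and a real analysis would likely use a mixed model rather than this bare-bones version:

        # Hypothetical sketch; invented numbers, not data from the actual trial.
        import statistics as stats

        # (tutor, pre/post gain in the trial tutorial, gain in the control tutorial)
        gains = [("T1", 0.42, 0.31), ("T2", 0.18, 0.22), ("T3", 0.55, 0.40),
                 ("T4", 0.30, 0.28), ("T5", 0.47, 0.33)]

        diffs = [treat - ctrl for _, treat, ctrl in gains]
        mean_diff = stats.mean(diffs)
        se = stats.stdev(diffs) / len(diffs) ** 0.5
        print(f"mean within-tutor advantage = {mean_diff:.2f} (SE {se:.2f})")
        # Stable tutor traits (experience, enthusiasm) cancel in the difference,
        # but anything that differs between a tutor's two tutorials does not.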

    But the bigger problems are how to assess critical thinking skills before and after, and, even more, how to assess this in a way that doesn’t mean that what we’re really capturing is students’ familiarity with the test rubrics, rather than the underlying skill we’re trying to capture.

    To increase the humour value, the class of mine they’ve chosen to run this in happens to be an undergraduate research methods course… ;-P I’m resisting (just barely – but see how long my resolve holds out during the term) the temptation to teach this research process itself, and see how well my students can tear it apart… ;-P But I figure it’s already got enough natural problems with my anarchistic additions…

    Comment by N. Pepperell — 19 January 2011 @ 9:18 am

  7. “testing a means, rather than an end”

    I presume that’s the testers’ intent: teach students how to fish instead of just giving them fishes and so on. Presumably it’s what employers and grad schools want from university grads: general intellectual skills that can be applied to a wide array of subjects. I agree that critiquing an algebraic proof requires specialized knowledge and skill, but presumably these are built on more general abilities in logic and critique that could be tapped by something like this CLA (which I’ve never seen and had never heard of before reading this article). Surely this is the case in your methods course: understanding and wielding the tools of the trade are learned skills that require certain basic aptitudes, without which the tool-using is clumsy and/or mechanical. So too there’s specialized knowledge required to interpret standard deviation differences between groups, but it’s built on a more general ability to discern what’s significant, what’s important, what’s a large effect, what’s the difference between the number and the researcher’s interpretive claims about the number. So though I haven’t seen the CLA I can picture it being designed to do more or less what it’s intended to do.

    How did they validate it? I don’t know. I saw in looking through one of the studies on the website that the CLA correlates not very highly with either SAT scores or university GPA. The researchers interpret this lack of correlation as evidence that the CLA measures something distinct from these other metrics. This sounds to me like an argument from the null, from non-significance, which I’m sure you’d agree from a general methodological POV is tempting but spurious. Maybe they’ve got better results that I haven’t seen.
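
    One more reason to be wary of that argument from the null: an observed correlation between two noisy measures understates the correlation between the underlying constructs. Here’s the classic Spearman correction for attenuation with invented numbers (the real CLA, SAT, and reliability figures would have to come from the technical reports):

        # Spearman's correction for attenuation; all numbers invented for illustration.
        def disattenuated(r_observed, reliability_x, reliability_y):
            return r_observed / (reliability_x * reliability_y) ** 0.5

        r_obs = 0.35                     # hypothetical observed CLA-SAT correlation
        rel_cla, rel_sat = 0.6, 0.9      # hypothetical score reliabilities
        print(f"construct-level correlation ~ {disattenuated(r_obs, rel_cla, rel_sat):.2f}")
        # A modest observed correlation can owe as much to measurement noise
        # as to the two tests measuring genuinely different things.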

    Using tutors as their own controls does sound perilous, doesn’t it? Tutors who prefer teaching method A to method B are more likely to give more effort and enthusiasm when practicing method A, which should skew the results. But of course I don’t need to tell you. Maybe if half the tutors are given some sort of reason to believe that method A is best while the other half are told of the superiority of method B? Empiricists can build long and successful careers by systematically tweaking these sorts of parameters.

    Even if you consciously resist the temptation, will your unconscious desire lead you into temptation anyway? Oh wait, now we’ve changed disciplines…

    Comment by ktismatics — 19 January 2011 @ 12:06 pm

  8. Hey ktismatics – apologies – have stayed up too late, and am not being very clear :-) By “testing a means”, I didn’t intend to pick out “the means of critical thinking”, but rather to say that they risk testing a particular form of testing, instead of the skill of critical thinking itself. As in, I’m not unsympathetic with the notion that they might want to measure critical thinking (although I’m agnostic about whether this is something that can be tested in abstraction from any specific content – and I’m downright suspicious that some content is likely to be smuggled in, whether the researchers want it to be or not…)

    But yes, on the study happening here… it’s just hugely problematic… Tutors as their own controls is actually an improvement from where we started, but still… The option of giving half the tutors reason to be more enthusiastic about one method than another… well, that highlights another problem. There is only one method being “tested” – what that method is being contrasted to, is… whatever other methods the tutors happen to be using. And that’s not likely to be uniform across tutorials. (Although it will be slightly more likely to be uniform in my course, because of how I’ve handled the course design… it’s a hideously difficult course to teach, for various reasons, and so I’ve done more than normal to standardise the teaching… But my course is only one of many courses in which this “experiment” is playing out…)

    To add to the general problem, I focus already on the skills they’re interested in improving, and I use a lot of the techniques they are trying to test in the broader set of trials. In most courses, I gather they will be trialling and testing several different techniques in different tutorials – this won’t work in my course because the techniques intended to be introduced for the study are already in general use and have been for some time. They did try to ask if I would remove them… ;-P I declined… It’s been a nightmare trying to work out reasonably reliable teaching techniques that can be used effectively by many different tutors – and the current course recipe seems to be finally more or less achieving that goal…

    But I’m happy to give the one untried technique (which involves the use of a particular software package for concept mapping) a go, because it emulates something we already do manually, just adding an electronic tool. Of course, I’ve explained to them that this means that, effectively, we’re testing the software, rather than the underlying skill… After some deliberation, they’ve decided they’re cool with that – but I’m very curious how they’ll report any results…

    I’ve also asked about the ethics of the study – no response yet there… ;-P But given how I push the students on this issue, we’re gonna be in trouble if they try to skirt around it…

    But sorry – shouldn’t let my personal bemusement with the “controlled trial” happening here, intrude so much into your thread :-)

    On the CLA not correlating (aside from the issue of the other alternative hypotheses that this might support), how would it correlate with something like, say, the GRE logic test? Or another test attempting to capture more abstract reasoning skills – perhaps via manipulations of abstract symbol systems (as opposed to the contextual symbolism of regular language)? My point about mathematical proofs wasn’t intended so much to provide an example that required specialist knowledge, as to suggest, from a different direction, that a test that, by the by, requires a certain kind of verbal expression might actually be – in spite of its intentions – testing a particular specialist knowledge (a knowledge demonstrated by the “liberal arts” students who had the right kind of specialist training), rather than testing general reasoning ability.

    But I’m not familiar with the CLA, other than what I’ve seen here, so more curious (and, I suppose, reflexively sceptical), than trying to make any sort of strong critique…

    Comment by N. Pepperell — 19 January 2011 @ 12:48 pm

  9. “they risk testing a particular form of testing, instead of the skill of critical thinking itself”

    I see. Your first go was probably clear, NP; I was exercising the general skill called “changing the question into one I can write about for 200 words.” This brings back the question of validation: is there any sort of convergence on the constructs the instrument purports to measure? If I remember maybe I’ll try to find this book to see what the authors have to say about it all. Suffice it for now that both the authors of the study and the journalists reporting about the study are overgeneralizing from the findings, giving them a more pessimistic spin than seems warranted by the sparse results reported so far.

    I wondered reading the write-up whether the researchers had some vested interest in the CLA: consultants perhaps, or maybe CLA paid for the research. I also wonder about the software package you’re testing: does the university have some financial commitment at stake? You mentioned ethics…

    Comment by ktismatics — 19 January 2011 @ 1:27 pm

  10. lol – to my knowledge, no – but my knowledge is fairly limited… The other things they are interested in trialling are not software-dependent, but relate to teaching techniques or assessment design. All low tech and “free” to implement, if one feels that staff time is free…

    But as for the ethics of asking students’ permission before involving them in research – and giving them a meaningful chance to opt out, when the trials are being done tutorial by tutorial… That I’m less certain about… (I suppose students could meaningfully opt out by refusing to participate in the pre- and post-skills testing process – suffering through the alternative instruction/instructional placebo is probably not a necessary problem, as long as we’re not collecting “data” from them…)

    Comment by N. Pepperell — 19 January 2011 @ 2:48 pm

  11. I don’t know if you’re still hanging around, NP, but your school’s research project triggered a couple of other thoughts while I was on my morning walk.

    In all likelihood the study cited in this post didn’t look at variations in teaching methods, with the exception of the amount of work assigned. The researchers found that profs who taught courses with large enrollments assigned shorter and fewer writing tasks, largely because they didn’t have time to mark the papers. The study also found greater improvements in the 3rd and 4th years of university than in 1st and 2nd years. Advanced courses have fewer students, which makes it less burdensome for the profs to crank up the writing requirements. One implication: get rid of those big intro courses, or at least assign multiple teachers or grad students to these classes for paper-marking duties.

    I mentioned the NBPTS study of differential teaching effectiveness. This project involved an evaluation of an intensive — and expensive — teacher retraining program. The NBPTS program is based on “best practices” observed in teachers whose students score well on measures of reasoning, analysis, and so on. So this study compared student outcomes for teachers who passed the NBPTS program versus those who hadn’t undergone the retraining. As I described, the differences weren’t dramatic. I suspect that the retrained teachers believed that they had become better teachers: enrollment in NBPTS was voluntary, required a lot of extra work, and got them a raise in pay.

    I think it’s likely that this heightened belief in teaching efficacy in the experimental-group teachers might have accounted for all of the differences in student learning outcomes. The importance of belief has been perhaps the most consistent source of outcome differences in psychotherapy research: belief in method trumps practice of method. The implication is that the emotional sense of engagement and passion is contagious: it draws students/clients into the process, juices them up, makes them want what the teacher/analyst wants for them.

    Comment by ktismatics — 20 January 2011 @ 8:18 am

  12. Some years back, I used to consult for educational programs, and this was pretty much my sense:

    I think it’s likely that this heightened belief in teaching efficacy in the experimental-group teachers might have accounted for all of the differences in student learning outcomes.

    People would feel very validated by improvements in academic performance of their students once they implemented this or that reform program, but, consulting, it was sort of easy to see that programs could implement widely divergent changes, and see similar results. There were some things that seemed substantively to help, but most improvements seemed due to a sort of Hawthorne effect, or the effect of conviction – or just the “effect” caused by many not terribly good teaching staff tending to depart when changes were initiated that required more effort…

    I say this not from a cynical perspective – I’ll take improvements, whatever the source – their material impact on the students wasn’t invalid, even if the people responsible for implementing the changes didn’t understand what I felt was likely to be producing the changes… I only minded when advocates would be blinded to these possible alternative explanations for their success – which both meant they could go missionary in a coercive way, assuming their methods would have similar effects when imposed from the outside, and also that they tended to get blindsided by the plateau that would eventually happen, as reforms settled in, required less concerted attention, became part of the mundane everyday experience and thus generated less excitement, and, in some cases, as results regressed toward the mean…
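
    That last point is easy to see in a minimal simulation (every parameter here is invented): select a group because of one low-scoring measurement and it will look improved at the next measurement even when nothing about it has changed:

        # Minimal regression-to-the-mean simulation; every parameter is invented.
        import random

        random.seed(0)
        true_level = [random.gauss(500, 40) for _ in range(1000)]   # stable "real" ability
        year1 = [t + random.gauss(0, 60) for t in true_level]       # noisy measurement
        year2 = [t + random.gauss(0, 60) for t in true_level]       # same abilities, new noise

        # "Reform" is targeted at the bottom quartile of year-1 scores
        cutoff = sorted(year1)[len(year1) // 4]
        chosen = [i for i, s in enumerate(year1) if s <= cutoff]

        before = sum(year1[i] for i in chosen) / len(chosen)
        after = sum(year2[i] for i in chosen) / len(chosen)
        print(f"targeted group: year 1 = {before:.0f}, year 2 = {after:.0f} (nothing actually changed)")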

    Just to butt in on the other conversation :-) On gap years: as an instructor, I prefer it when students have taken gap years (or gap periods – often more than one year is even better). They are more likely to know why they’re at uni, how university relates to other social spaces they might have to occupy (or want to avoid…) later, etc.

    I’ve just for the first time handled student selection in Australia, and here there are also arguably advantages in the selection process for students who have taken a year off. These are probably locally idiosyncratic – my impression has generally been that the US higher education system is less flexible and less tolerant of students who don’t go more or less straight through from high school, although this will obviously depend on where they’re applying… But here, students who are not coming straight from high school may have (depending on their field) an additional selection round in which their applications can be considered, in addition to the normal selection round where they compete with school leavers. The application process is also different – they can tell us more about themselves, and we can take this additional information into account (for better or worse, depending on the applicant). But universities outside Australia may not differentiate anywhere near as strongly between school leavers and applicants who have taken a bit of time off…

    Comment by N. Pepperell — 20 January 2011 @ 9:56 am

  13. …and in a related study, researchers found that test-taking resulted in 50% better memory retrieval a week later than did repeated studying or concept mapping.

    Comment by ktismatics — 20 January 2011 @ 4:46 pm

