Texting and using Facebook while studying related to lower GPAs

Posted by reyjunco on September 19, 2011 in Research

This week, Shelia Cotten and I will present a paper on multitasking at the Oxford Internet Institute’s A Decade in Internet Time Symposium on the Dynamics of the Internet and Society.

We have posted the draft of the paper here. Please note that the paper is in draft form and has yet to be peer reviewed. Therefore, we’d love it if those of you reading this would act as peer reviewers and provide us feedback about the paper. We’ll incorporate any feedback before we submit the paper for publication in an academic journal. We are particularly interested in your thoughts about our interpretation of the results. Please feel free to share your feedback in the comments below or to email me directly at rey (period) junco at gmail. We’ll acknowledge any feedback we use in the paper.

Using hierarchical linear regression (N = 1,839) with gender, ethnicity, parental education level, high school GPA, and Internet skills as control variables, we found that frequency of sending text messages and using Facebook while doing schoolwork were negatively related to overall GPA. However, frequency of using email while doing schoolwork was positively related to GPA.
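For readers unfamiliar with hierarchical regression, the approach can be sketched as entering predictors in blocks (controls first, then the technology-use variables) and comparing R² across the nested models. This is a minimal illustration on synthetic data; the variable names, coefficients, and distributions below are hypothetical stand-ins, not the study’s data or model:

```python
import numpy as np

# Synthetic stand-ins for the study's variables; all coefficients and
# distributions here are hypothetical, chosen only to illustrate the method.
rng = np.random.default_rng(0)
n = 500
hs_gpa = rng.normal(3.0, 0.5, n)    # control variable (e.g., high school GPA)
texting = rng.normal(0.0, 1.0, n)   # frequency of texting while doing schoolwork
email = rng.normal(0.0, 1.0, n)     # frequency of email while doing schoolwork
gpa = 2.0 + 0.5 * hs_gpa - 0.2 * texting + 0.1 * email + rng.normal(0.0, 0.4, n)

def r_squared(predictors, y):
    """R^2 of an ordinary-least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Block 1: controls only.  Block 2: controls plus technology-use predictors.
r2_controls = r_squared([hs_gpa], gpa)
r2_full = r_squared([hs_gpa, texting, email], gpa)
print(f"controls-only R^2 = {r2_controls:.3f}")
print(f"full-model R^2 = {r2_full:.3f} (delta = {r2_full - r2_controls:.3f})")
```

The change in R² between blocks indicates how much variance in GPA the technology-use variables explain over and above the controls.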

The finding that using Facebook and texting while doing schoolwork was negatively related to GPA is congruent with previous research on multitasking; however, the finding that using email while doing schoolwork was positively related is not. We theorize that this discrepancy stems from how the technologies are used: students use Facebook and texting socially with their peers, while they use email to communicate with faculty and university staff. Therefore, we propose that social activities will lead to more negative outcomes while academic activities will lead to more positive ones.

As with my recent study of Facebook use and engagement, these data are cross-sectional and correlational. While it’s intriguing to think that multitasking causes students to have lower grades, it is equally likely that students who have lower GPAs happen to spend more time multitasking. There is more than likely an extraneous causal variable, related to both multitasking and student academic achievement, that we have yet to measure. We’ll examine this more in future research.


  • I’m jealous that you’re going to this conference! The conference looks really cool, the people at the institute are as nice as they are smart, and it’s a fantastic location.  I thoroughly enjoyed the two weeks I spent at the OII this summer and I’m sure that you’ll enjoy your time there, too!

    Quick observations about your paper:

    1. Cite the ECAR undergrad study with caution. Their response rates are very low and I strongly suspect there is response bias skewing many of their numbers given the (a) self-selection of respondents and (b) web-based administration mode. The Core Data Service may serve as a useful point of triangulation (2009 institutional delegates reported between 80 and 90 percent computer ownership).

    2. I want more information about your survey instrument, particularly the validity and reliability of the questions you created. The questions you asked are straightforward and there probably aren’t any problems, but that’s an assumption and I’d like more information on how and why they were constructed. Several of the questions, especially the “How many?” questions, probably lack some validity if you really think people can answer them accurately. But they’re okay to use in a comparative sense because they are probably reliable, as you show with some of your correlations between “yesterday” and “average” usage (which, by the way, is a clever trick I’m stealing for my toolset). In fact, you describe this as one of your limitations, but I think you overstate it: your approach is fine if you’re interested in making relative rather than absolute comparisons.

    3. Your demographic questions seem to pose some limitations. In particular, conflating race and ethnicity may be problematic given the continuing increase in Latinos in U.S. colleges and the broader U.S. population. It’s almost certainly washed out in your study given the overwhelming whiteness of your respondents and population but could be an issue for additional studies.

    4. I like that you linked institutional data to your survey instead of completely relying on self-reported data. That’s a nice touch and if you had a larger data set it would have been nice to explore adding additional variables to the model, especially test scores and major.

    5. Most importantly, I was surprised that you didn’t discuss the R squared of this study, especially the rather small increase between your second, third, and fourth blocks. The small increase seems to weaken the impact of your study and its practical significance.

  • Thanks, Kevin, really helpful feedback. You’re absolutely right that the change in R^2 deserves more time in the discussion. In a new paper that’s about to be published in Computers in Human Behavior, I go on and on about the change in R^2. I wouldn’t say the small change “weakens” the impact of the study so much as it weakens the impact of the results; or, put another way, let’s not lose too much sleep over these findings.

    You may have also detected something from my tone in the limitations– I have been spending a lot of time thinking about the validity of asking students to estimate tech use and coming up with better ways of assessing time spent online and in various activities (more to come on that in the near future).

  • There’s a lot to be said about the validity of tech use questions! I’ve thought about this a lot, too, and it’s shaped much of what I have (and have not done) with technology here at NSSE. We’re still trying to put together technology questions for the next version of the survey (and we’re working with EDUCAUSE to do so – http://www.educause.edu/E2011/Program/UPD15) and this is playing a large role in that process. In particular, it keeps preventing me from suggesting particular lines of questioning that would be interesting but possibly impossible with a self-reported survey.

    I think we may be approaching the limit of self-reported data in some areas of technology use. In particular, I’m stymied in my own thinking by the convergence of multitasking and ubiquitous access.  I think you mention something in your paper about the distinction between “using” a service for a period of time and “logging on” to a service and that distinction is something I’ve thought about a lot, too.  Maybe “use” is just too broad a concept that has lost meaning in some of these contexts and we need to start focusing on particular uses and behaviors.  But when we drill down to specific details we quickly run into limitations of memory and understanding if we’re relying on self-reported data. What to do, what to do?

  • Absolutely, Kevin. In future surveys, I’ll be distinguishing between “logging on” and “using”; however, as you said, drilling down to that level may not yield very good information, as it’s not something humans encode. For example, I bet the average driver can tell me how much time they spend driving, but I bet they can’t tell me how much time they spend working the radio/sound system when they drive.

  • Any distraction like social media will have an effect on studying. By its nature, it lets topics, interests, input, and feedback wander from the main point, so it is a distraction.
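An aside on the change in R^2 raised in the comments above: the standard way to ask whether a newly entered block of predictors contributes significantly is a nested-model F test on the R² increment. A minimal sketch on synthetic data (all variable names and numbers below are hypothetical, not from the paper):

```python
import numpy as np

def r_squared(predictors, y):
    """R^2 of an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - (y - X @ beta).var() / y.var()

# Hypothetical synthetic data, for illustration only.
rng = np.random.default_rng(1)
n = 300
control = rng.normal(size=n)               # e.g., a control such as high school GPA
added = rng.normal(size=n)                 # e.g., a technology-use predictor
y = 0.6 * control + 0.15 * added + rng.normal(size=n)

r2_reduced = r_squared([control], y)       # earlier block only
r2_full = r_squared([control, added], y)   # after entering the new block
q, p_full = 1, 2                           # predictors added / total predictors
F = ((r2_full - r2_reduced) / q) / ((1.0 - r2_full) / (n - p_full - 1))
print(f"delta R^2 = {r2_full - r2_reduced:.4f}, F({q}, {n - p_full - 1}) = {F:.2f}")
```

A significant F says the increment is statistically reliable; as the exchange above notes, a tiny ΔR² can still be of little practical significance even when it passes this test.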

Creative Commons License
Unless otherwise specified, all content on this blog is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
