Junco, R. (2012). The relationship between frequency of Facebook use, participation in Facebook activities, and student engagement. Computers & Education, 58(1), 162-171.
Facebook frequency – I use two different ways to measure frequency of use: 1) average time spent on Facebook per day and 2) number of times Facebook is checked per day. Note that if you are relating these to outcome measures, you will probably get different results for time and checking (there is more detail about this in this paper and this one). My reason for using a continuous measure of time spent on Facebook is simple: using categorical measures (e.g., 1-2 hours, 3-4 hours, etc.) presupposes an underlying distribution in your data that may not actually exist. Plus, you can always convert a continuous variable to a categorical one, but not vice versa. I recognize that using such fine-grained options could be problematic; however, survey methods are the most efficient way to collect these data (stay tuned for one of my future projects that seeks to remedy the problems inherent in employing surveys to collect usage data). You might have noticed that the “time spent on” question is worded in a way to include more stems than just Facebook – I used a few additional stems (e.g., searching for information, email, etc.). Feel free to add your own.
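To make the one-way nature of that conversion concrete, here is a minimal sketch (the bin boundaries are illustrative assumptions, not the survey's actual response options): continuous minutes-per-day responses can always be collapsed into categories after data collection, but categorical responses can never be recovered as minutes.

```python
def to_category(minutes):
    """Bin a continuous minutes-per-day value into coarse categories.

    The cut points here are hypothetical, chosen only to illustrate
    that binning is a lossy, one-way transformation.
    """
    if minutes < 60:
        return "less than 1 hour"
    elif minutes < 120:
        return "1-2 hours"
    elif minutes < 240:
        return "2-4 hours"
    else:
        return "more than 4 hours"

# Continuous responses collected via a survey item (simulated values)
responses = [15, 95, 180, 300]
print([to_category(m) for m in responses])
# The reverse mapping ("1-2 hours" -> exact minutes) is impossible.
```

If you collect the categorical version instead, the information inside each bin is gone for good, which is exactly why the continuous item is the safer default.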
Facebook activities – Because the types of Facebook activities change with the addition or deletion of features, I used a novel method to come up with these 14 items. I posted a public status update stating: ‘‘I need your help for my next research project. What are the things you do on Facebook?’’ The items submitted by 39 of my Facebook friends were collated and compiled into a non-overlapping list of 14 items. These 14 items were shared with two separate groups of undergraduate students for input, revised, and posted on Facebook for further comments. All of the items from the original list were kept, and most of them were edited for clarity and relevance.
You can download a PDF of the questions here. If you end up using these questions in your own research, please send me an email and let me know. You may also be interested in my colleague Nicole Ellison’s Facebook Intensity Scale.
For those of you who are regular readers of my blog, this post is very different from what you are accustomed to seeing here. While I usually keep my posts professional (I’ve often joked that writing for my blog is like writing mini journal articles), this post is incredibly personal. I’ve been thinking about writing it for some time, but quite frankly, haven’t had the guts to do it because it’s “not the way we do things” in academia. Interestingly enough, I tend to rail against the notion of doing things in certain ways just because that’s “how we’ve always done it.”
Last week, I was having a conversation with a close friend about my job search and she astutely noted that while I’m quite transparent online when it comes to many other areas of my career, I’ve kept my job search a secret. While her statement wasn’t the impetus for this post, it was certainly the reason I decided to write it sooner rather than later.
You see, I’ve been on the job market for a few years now. This is no surprise to the people who are close to me (as well as my department chairs). I am grateful for the support my institution has granted me over the years, but as I continue to specialize in my field, I am finding that my current position is no longer the right fit. While I’m considered faculty, 75% of my time is allocated to being a student affairs administrator at a non-research-focused university. Although that’s been rewarding, as my career has evolved I have focused more on my research, and with that has come a related desire to teach more and to mentor graduate students. Even though my research and writing time is limited by my administrative duties, I’ve been able to publish a great deal in top-tier journals. So, I’ve applied for many faculty positions in the last few years. I’d love to be in a department at a research institution where my colleagues are also conducting research in similar areas. I envision collaborating with other faculty members within my home department and across departments (as I’ve already done in some of my research projects). I also envision teaching graduate courses and advising, mentoring, and collaborating with graduate students to share my passion for teaching and research. Furthermore, a research institution would have the support structures in place for other research-related activities like obtaining grants and collecting large-scale data sets.
Now here comes the part that might come as a surprise—I have received rejection after rejection for these positions. Most of the time, these rejections come without ever getting an interview. In a few rare cases, I’ve actually gotten campus interviews, only to later learn that someone more junior or much more senior had been hired. Even more surprising is that when I have gotten an interview, I’ve been given positive feedback about how well I did. And herein lies the disconnect—while I have had much success with my research, I have had no success at finding a new job. In the past year alone, I’ve applied to ten positions, and because of feedback from a few trusted colleagues I even expanded my search beyond student affairs, higher education, and counselor education programs.
I’ve considered many possible explanations, some of which include personality (it would be easy to understand if I were a jerk, but I’m not—people actually like working with me), institutional type (it’s not easy to go from a teaching institution to a research one), my current position as mainly an administrator, and the fact that I’m a full professor and that committees might think I want to come in at that rank. Of all of the possibilities I’ve considered, I’m beginning to settle on what might be the likeliest explanation—the fact that I am the only person doing this type of research in education. As far as I know, there are no other educators examining the academic and psychosocial impact of social media. You would think this would be intriguing to a department or a search committee; however, it makes me an outlier. Most of you reading this blog “get it” when it comes to using social media in productive ways; however, I’m guessing that most of the people evaluating me on search committees don’t. My take is that my work is seen as more of a curiosity. I also wonder whether some may think that my work would lead to major shifts in academia and/or the loss of faculty jobs. But to be quite honest, those are only guesses. It’s clear that my research is different from what others are doing in my field, and that very well could be the explanation.
I realize that sharing this is risky, especially since academics don’t talk publicly about their job searches. But quite frankly, what’s the worst that could happen? It’s not like a potential employer is going to read this post and not hire me—that’s already been happening well before I wrote this. Besides, whether I like it or not (and to be clear, I don’t), the field of higher education may be trying to tell me something—and it may be time for me to look elsewhere and begin checking out opportunities in the private sector. If you happen to know of an organization looking for someone like me, please feel free to share this post and my cv with them.
If what I’ve written resonates with you, I’d love to hear from you. I’d also love to hear from you if you have feedback or a story about your own experiences with the academic job search process. Please leave a comment and I promise I’ll reply to every one.
Over at Wired, Tim Carmody wrote a great piece about Apple’s latest foray into the education market – digital textbooks via the iBooks 2 app. Tim hits the nail on the head in his introduction (emphasis mine):
Engagement is a big word in education. It combines both objective participation and subjective emotion. It’s one of the few psychological terms in education that links students, teachers and content. So it’s not surprising that in promoting the iPad as a tool for education, Apple touted the device’s ability to engage students.
Because they’re so engaging: okay, let’s just drop the bull and say it, because they’re cool
Tim understands what Apple and most reporters either don’t know or like to gloss over: that there is nothing engaging about iTextbooks in relation to the important interpersonal engagement we’re striving for in order to increase student motivation, participation, and academic outcomes (here is a great article reviewing student engagement and related research).
There’s no doubt that iTextbooks are intrapersonally engaging—or, put another way, interactive. However, just because something is interactive does not mean that it is engaging. Although I’m not endorsing them either, at least Inkling has the “social learning network” feature that allows students and instructors to carry on a conversation about book content. That’s certainly a step in the right direction for engagement.
Some of my research on Facebook and Twitter illustrates the idea of “engaging” tech vs. actual engagement: using an “engaging” system like Facebook doesn’t predict much of the variance in real-world engagement; however, using it in certain ways does. Learning outcomes come about not because of the particular technology being used, but because of how that technology is used to support sound pedagogy. Certainly, some technologies will be better suited for certain activities than others (for instance, Twitter lends itself better to ongoing synchronous and asynchronous conversations than email).
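The analytic idea behind that claim can be sketched with a hierarchical regression on simulated data (the numbers below are synthetic and for illustration only, not results from my actual studies): when the outcome is driven by a specific activity rather than raw time on the site, adding the activity measure to the model explains variance that time alone does not.

```python
# Hypothetical sketch: nested OLS models on simulated survey data.
# Variable names and effect sizes are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
time_on_fb = rng.normal(120, 40, n)    # minutes/day (simulated)
activity = rng.normal(3, 1, n)         # frequency of a specific activity (simulated)
# Simulated outcome: engagement depends on the activity, not raw time
engagement = 0.8 * activity + rng.normal(0, 1, n)

def r_squared(X, y):
    """R^2 from ordinary least squares with an intercept term."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_time = r_squared(time_on_fb, engagement)
r2_full = r_squared(np.column_stack([time_on_fb, activity]), engagement)
print(f"time only: R^2 = {r2_time:.3f}; time + activity: R^2 = {r2_full:.3f}")
```

Because the models are nested, the full model's R² can only match or exceed the time-only model's; the interesting question in real data is how large that increment is for each specific activity.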
Educators will often become enamored with new technologies and adopt them with the underlying assumption that technology in and of itself must be good for learning (for a great review, see the Outcomes section of this paper). We see this type of hype with almost every new educational technology tool that is released. Take for example iPad initiatives at a growing number of universities: there are no data to show that having students adopt iPads leads to better learning outcomes. So why adopt them on such a widespread scale? The only reason that I can tell is because they are cool.
Now, there is nothing wrong with the “cool factor.” Doing traditional educational activities with a shiny new toy can improve student motivation, a phenomenon I liken to Jedi mind tricks (“these are not the boring lectures you are looking for”). Unfortunately, the effect of the cool factor is short-lived when it comes to promoting positive educational outcomes. When all students have iPads (which is presumptuous to assume—I’ll save my rant about how iTextbooks will widen digital inequalities for another time), reading iTextbooks will be just like any other boring, non-engaging assignment that students have to complete in isolation. In other words, the coolness wears off and the interactivity becomes a routine part of the process.
Of course we’ll never know how well new technologies work unless we try them. But in addition to trying them, we must integrate them in educationally-purposeful ways and also assess how integrating them in these ways affects student outcomes in comparison to other tools (in the case of iTextbooks, reasonable comparisons would be regular textbooks, other forms of digital books, and interactive websites). Put another way, we can (and should) be excited about the possibility of how new technologies might enhance learning; however, we must be mindful of evaluating what works and more importantly, what doesn’t.
Recently, I’ve been thinking a lot about using badging systems to support student learning. There is great potential for using badging systems to add a game layer to learning in the traditional classroom, thereby increasing student engagement and learning outcomes.
Last year at SxSWi, Seth Priebatsch from SCVNGR gave a keynote (video) about adding a game layer on top of the world. If you don’t have much time, I’d recommend skipping to the part about game mechanics and engagement in education which starts at 10:30. Seth’s talk sparked a number of ideas for me, one of which grew to be our proposal Game Dynamics in the Classroom: Badges to Improve Student Engagement and Learning in Large Lecture Courses for the Digital Media + Learning Research Competition.
The gist (straight from our proposal):
The goal of this project is to create and evaluate a badging system for learning in order to increase college student academic engagement and improve class attendance and academic performance. We hypothesize that we can improve college student academic outcomes by combining Location Based Services (LBS) with a badging system employing game dynamics and integrating it in an educationally-relevant way in a large-lecture course at The University of Florida.
I’m really excited that we were able to partner with SCVNGR to develop a badging system for this project. If the project gets funded, we’ll use an experimental design to evaluate the impact of integrating our badging system and related game dynamics into large lecture courses. As outcomes, we’ll measure differences in student engagement, attendance, and academic performance between the experimental group and the control group. Here are our methods:
Before the semester begins, university students registered for a large-lecture introductory course will be randomly assigned to either a control section or an experimental section. Both the control and experimental sections will be taught by the same instructor and will follow the same schedule in the presentation of course material. Each section will contain at least 200 students, for a total of 400 participants. The Institutional Review Board for the Protection of Human Subjects will approve all research procedures.
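The random assignment step described above can be sketched in a few lines (a hypothetical illustration of the procedure, not the project's actual assignment code): shuffle the course roster and split it into two equal sections.

```python
# Minimal sketch of randomly assigning a roster to two sections.
import random

def assign_sections(roster, seed=None):
    """Randomly split a roster into control and experimental sections."""
    students = list(roster)
    random.Random(seed).shuffle(students)
    half = len(students) // 2
    return students[:half], students[half:]

# Hypothetical 400-student roster, as in the proposed design
roster = [f"student_{i:03d}" for i in range(400)]
control, experimental = assign_sections(roster, seed=42)
print(len(control), len(experimental))  # -> 200 200
```

Seeding the shuffle makes the assignment reproducible for the research record; in practice, registrar constraints would also need to be handled.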
Students in the experimental section will use their Android or iOS devices to engage in academic challenges in order to earn badges. Students will check in to the classroom after indicated class sessions. Once they check in, they will be presented with a challenge that involves answering five questions about that day’s lecture, developed in consultation with the course instructor. Students will receive a point for each question they answer correctly. They will also receive points for checking in to the class location, posting pictures of their notes, and posting questions about the day’s lecture. Additionally, students will receive points towards badges by participating in relevant challenges outside of class, including “social check-ins” with a study group, visiting a professor/TA’s office or supplementary instruction session, or checking into the tutoring center.
When a student accumulates a pre-determined number of points, she or he will receive a badge. Students may earn one badge for each week of the course. At the end of the semester, students will receive course extra credit based on the number of badges they have earned.
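The points-to-badges rule above can be sketched as follows, under one reading of the design: a student earns at most one badge per week, awarded whenever that week's points reach the threshold (the threshold value and point totals here are illustrative assumptions, not the proposal's actual scoring scheme).

```python
BADGE_THRESHOLD = 10  # hypothetical number of points needed per badge

def badges_earned(weekly_points, threshold=BADGE_THRESHOLD):
    """Count badges: one per week in which points reach the threshold.

    This encodes one interpretation of the proposal's rule that
    students may earn at most one badge for each week of the course.
    """
    return sum(1 for pts in weekly_points if pts >= threshold)

# Example: points earned across a four-week stretch (simulated)
weekly_points = [12, 8, 15, 10]
print(badges_earned(weekly_points))  # -> 3
```

Badge counts computed this way map directly onto the end-of-semester extra credit, which keeps the incentive structure transparent to students.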
Students in the control section will have the opportunity to answer the same questions as the experimental group; however, these questions will be presented as quizzes using TurningTechnologies ResponseWare. ResponseWare allows students to submit answers by using either their mobile phones or their laptop computers. The quizzes will include the same content and be administered at the same time as the experimental group. Control group students will also be able to complete the other challenges, but they will be presented as extra credit opportunities accompanied by manual tracking methods and a traditional scoring rubric equivalent to the badge system.
You can read the entire proposal here. I’d love to hear your thoughts on this project. Please feel free to leave a comment below or on the proposal page at the DML site.
[Update 2/13/12: Our proposal was not selected for funding by DML; however, we are still looking for funding. Please read this post and share it through your networks.]