At the EDUCAUSE Annual Conference, held Oct. 31–Nov. 3 in Philadelphia, more than 8,000 attendees gathered to discuss the latest IT trends and solutions affecting stakeholders and students at higher education institutions. Visit the conference website and its Twitter feed for more information.
Learning About Learning Data
In The Wicked Problem of Learning Data Privacy, representatives from the University of California (UC) shared the importance of opening a dialogue on campus about privacy and learning data—data generated by students, teachers, and staffers that helps support student success. It often includes personally identifiable information, such as names, courses taken, quiz responses, and discussion responses. Analysis of this data can inform institutional decisions, support educational research, and positively impact student outcomes.
Jenn Stringer (UC–Berkeley) said the conversation about learning data privacy has to be driven by faculty members. There needs to be a consensus about who has access—i.e., how much should advisors, the registrar, and others on campus know about student learning data?
Attendees split into groups to brainstorm the challenges that can arise when adopting online tools for learning data analytics. Common answers included deciding who owns the data, how it integrates with a campus learning management system (LMS), the cost of maintaining the data and tools over time, and choosing how the data is hosted and accessed (e.g., cloud-based or local).
To illustrate the importance of paying attention to privacy concerns when dealing with data, Stringer described a bad experience Berkeley had with one learning analytics vendor that used an opt-out feature for gathering data. The feature was unclearly worded (students had to uncheck a box labeled “get career opportunities relevant to me” if they didn’t want their data to be sold to outside companies). As Stringer noted, students would leave the box checked, because “career opportunities” sounds positive—but then they were bombarded by potential employers. Stringer complained to the vendor, and now that feature is opt-in and more clearly states that by checking the box, students’ data will be shared.
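The fix Stringer pushed for—sharing off by default, with an explicit and clearly labeled action required to enable it—is a common consent pattern. A minimal sketch of that pattern follows; the class, field names, and sample data are illustrative, not from any real vendor's system.

```python
from dataclasses import dataclass

# Sketch of the opt-in pattern: data sharing defaults to OFF, and a record
# is shared only if the student explicitly enabled it. All names are
# hypothetical.
@dataclass
class ConsentPrefs:
    # Default False: the student must check the box themselves.
    share_with_recruiters: bool = False

def shareable_records(records, prefs):
    """Return only the records of students who explicitly opted in."""
    return [r for r in records if prefs[r["student"]].share_with_recruiters]

prefs = {"s1": ConsentPrefs(), "s2": ConsentPrefs(share_with_recruiters=True)}
records = [{"student": "s1", "quiz": 88}, {"student": "s2", "quiz": 92}]
print(shareable_records(records, prefs))  # only s2's record is shared
```

The key design choice is the default: with opt-in, a student who never touches the checkbox is never shared, which is the opposite of the behavior Stringer originally encountered.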
Jim Williamson (UCLA) explained that he and his colleagues have labeled learning data privacy a “wicked problem”—a problem with no agreed-upon, unitary aims; no definitive formulation; and no neat ending. Privacy issues surrounding learning data are complex, he said. There’s no one-size-fits-all solution.
Attendees once again split into groups, this time to compile a list of reasons why learning data privacy is a wicked problem. For example, there are multiple rulesets about data collection at different campuses in a university system, there are different solutions provided by different vendors, and the more data that’s collected, the more complex the ownership issues get.
Mary-Ellen Kreher (the UC Office of the President’s Innovative Learning Technology Initiative) said that UC’s approach is to always have a say in how vendors collect data. It’s a central part of negotiations with them. The university also insists on using interoperable standards for transferring the data, so it’s more accessible to various stakeholders. Additionally, it prioritizes transparency—students can opt out of anyone using their data if they’d like to. These solutions can’t solve a wicked problem, but they can certainly go a long way toward defying the gravity of one.
Data at the Library
Closing the Data Gap: Integrating Library Data Into Institutional Learning Analytics was another session focused on learning data, but this time geared to the library’s role in using it. Megan Oakleaf (Syracuse University) said that all types of systems at a university feed into the process of collecting learning analytics, except the library—and that needs to change. Libraries look at their interactions with students to make connections, and now they need to make those connections with institutional data.
She added that when it comes to next-gen learning (i.e., beyond the LMS), libraries should say, “Help us help you.” Libraries already instruct and provide resources, so they can use their expertise to work with university stakeholders on what data should be collected or omitted. For example, libraries gather data on the types of resources that have been checked out, but they don’t share that Student X checked out The Catcher in the Rye. They would re-frame it as: One student checked out a fiction book.
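The re-framing Oakleaf describes—reporting aggregate categories rather than identifiable checkouts—amounts to stripping identifying fields and counting by category. A minimal sketch, with an invented record layout and sample data:

```python
from collections import Counter

# Hypothetical raw circulation records; the field names and titles are
# illustrative, not from any real library system.
checkouts = [
    {"student_id": "S1001", "title": "The Catcher in the Rye", "category": "fiction"},
    {"student_id": "S1002", "title": "Calculus Made Easy", "category": "nonfiction"},
    {"student_id": "S1001", "title": "Beloved", "category": "fiction"},
]

def aggregate_by_category(records):
    """Drop identifying fields (student, title) and report category counts."""
    return Counter(r["category"] for r in records)

print(aggregate_by_category(checkouts))  # Counter({'fiction': 2, 'nonfiction': 1})
```

Only the counts leave the library; "Student X checked out The Catcher in the Rye" becomes "two fiction checkouts."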
Another way libraries can integrate their data is to tie it to student outcomes. Shane Nackerud (University of Minnesota) shared his university's study of how library usage relates to students' success at school. In 2011, he looked at metrics such as reference data, courses, workstation data, and e-resource use, and partnered with the Office of Institutional Research on his campus to continue tracking this data. They were able to use the data to inform attempts to increase library traffic: In fall 2011, nearly 80% of students used the library, and in fall 2016, nearly 90% used it.
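Tying usage data to outcomes is, at its core, a join between library records and institutional records, followed by a group-wise comparison. A stdlib-only sketch under invented sample data (the student IDs, GPA figures, and field names are all hypothetical):

```python
# Hypothetical join of library-usage flags with institutional outcome data.
library_use = {"A1": True, "A2": False, "A3": True}   # student -> used library?
term_gpa = {"A1": 3.6, "A2": 2.9, "A3": 3.4}          # student -> term GPA

def mean_gpa_by_usage(usage, outcomes):
    """Group students by library usage and average their outcomes."""
    groups = {True: [], False: []}
    for student, used in usage.items():
        if student in outcomes:  # only students present in both datasets
            groups[used].append(outcomes[student])
    return {used: sum(gpas) / len(gpas) for used, gpas in groups.items() if gpas}

print(mean_gpa_by_usage(library_use, term_gpa))
```

In practice the join runs on de-identified or aggregated records held by institutional research, and a raw difference in means is only a starting point—it shows correlation, not that library use causes better outcomes.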
Putting IT in Students’ Hands
In Creating an IT Innovation Lab to Promote Student Career-Readiness in STEM Fields, staffers from New York University (NYU) IT Operations Technology Services and Support shared how they began taking on student workers as a way to prepare them for IT jobs. At their IT Innovation Lab, Meenakshi Baker and her team give students opportunities to learn skills that will be useful in the “real world”: problem-solving, teamwork, etc. They even expanded the lab to the local community by talking to high school students throughout the five boroughs of New York about the skills they’ll need to succeed.
Software engineer David Garwin discussed the aspects of NYU's lab that others could emulate:
- Communication between developers and students working at the lab (they use Slack, Trello, and Microsoft TFS to stay in touch, but they're also together in a physical space)
- Developers keeping up-to-date with the computer science industry so students know what potential employers will be looking for when they graduate
- Students leading their own teams when developing new products so they can experience the process from inception to deployment (this gives them a 360-degree view of the software development environment)
Gregory Fisher, another software engineer, said that the students are held to the same standards as the lab's full-time employees. They are given detailed feedback on code that doesn't make it into production so they can learn from their mistakes. And the mentoring aspect of the student-developer relationship means that many problems are caught early, while they're still easy to fix. The lab documents every part of the software development process, including the reasoning behind changes made to a product, as a way to maintain institutional knowledge and help developers learn from past efforts.
Software developer James Walsh said that students are kept motivated by metrics tracking their progress. The lab team conducts informal check-ins and encourages side projects to keep students engaged, along with traditional check-ins at hiring, at completion of their first production task, at graduation, and when students land a job (as well as monthly while they are working at the lab). This allows the developers to make improvements that help students work more efficiently and to find out what employers are looking for in potential hires.
Making Better Decisions
At one of the general sessions, behavioral economics expert Katherine Milkman shared her research on how organizations can get better outcomes by paying attention to how they present information. Decision Biases: Improving the Quality of Our Everyday Decisions opened with two cautions from Milkman in the form of exercises.
First, attendees viewed a video of a basketball game and were asked to count how many times players in white shirts passed the ball. While the game goes on around her, a woman holding an umbrella walks across the court. Of course, most viewers were concentrating too hard on the white-shirted players to see her. Milkman said that being too focused on one aspect of a decision can be detrimental to the decision-making process.
Second, Milkman had attendees write down their best guesses for statistics such as the profits of Walmart and the distance from the Earth to the moon. They could use ranges—the lowest number they thought the answer could be and the highest. Most attendees wrote correct ranges for only four of the 10 statistics, which Milkman said was typical. This illustrates the danger of overconfidence—overestimating what you know can undermine a decision. Understanding the systematic way we make decisions helps us get better at it, Milkman said.
She then turned to “choice architecture”—the way the environment shapes our choices. For example, in a cafeteria, the first item in the line of food choices will end up on the most plates. So in schools, staffers could place the healthy options at the front of the line to encourage better eating habits. Organizations can consciously invoke positive choice architecture to influence behavior. Milkman showed a Volkswagen-sponsored video that asked whether people using an escalator could be convinced to use the set of stairs right next to it. The stairs were transformed into a working piano, with each step representing one key. The video showed people flocking to the stairs to make music with their feet, while the escalator stood empty.
She noted that choice architects should expect errors from people using their product or system (e.g., create cards that can be inserted into a machine and read properly even if someone puts one in upside down) and make sure their product or system provides feedback (e.g., design digital cameras to click so users know a photo was taken, even though the click isn't needed for the camera to work). Milkman advocated seven principles of choice architecture, sharing research that underpins each one and highlighting how organizations can tweak language or marketing to make it more likely that a customer's decision will work in the organization's favor. The principles are:
- Set helpful defaults (e.g., make organ donation an opt-out feature instead of an opt-in one)
- Prompt people to plan (e.g., it’s more likely you’ll go somewhere or do something if you’re asked what time and how you’ll accomplish it—even on a questionnaire, you’re accountable to yourself)
- Leverage social norms (e.g., if people seem happy, you’ll be more likely to join in with whatever they’re doing)
- Create accountability (e.g., make people feel like they are being observed or have to certify that something on a form is true—if that language is at the top of a questionnaire, people are more likely to be truthful)
- Capitalize on fresh starts (e.g., a new year or month or the start of a new job can be a good time to reach out to potential customers, since they’re looking to make changes in their lives after “milestones” such as birthdays or holidays)
- Allow pre-commitment (e.g., allow people to get the same rate if they sign up for something now or in the future)
- Rely on loss framing (e.g., money has to be returned if a benchmark is not met, instead of receiving a bonus if it is met)
EDUCAUSE 2018 will be held Oct. 30–Nov. 2 in Denver.