Wednesday, December 10, 2014

Paper Summary: A Crowd of Your Own


 A Crowd of Your Own: Crowdsourcing for On-Demand Personalization
HCOMP 2014 (Notable Paper)
 
A lot of my research explores personalization. Personalization is a way for computers to support people’s diverse interests and needs by providing content tailored to the individual. However, despite clear evidence that personalization can improve our information experiences, it remains very hard for computers to actually understand individual differences.

Some recent strides have been made in developing algorithmic approaches to personalization by identifying patterns that have been seen before. For example, search engines are able to successfully personalize your search results by recognizing queries that you have issued before and helping you get back to the same results you found last time. The patterns identified do not necessarily need to be within your own data. Netflix and Amazon provide pretty good movie and product recommendations by identifying people with similar tastes to yours and recommending to you what those people have watched or bought.
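To make the re-finding example concrete, here is a minimal sketch (the data structures and names are invented for illustration, not any search engine's actual implementation) of how a system might boost a previously clicked result when it recognizes a repeated query:

    # Minimal sketch of query re-finding (hypothetical data): if the user
    # repeats a query, surface the result they clicked last time at the top.
    def rerank(click_history, query, results):
        last_click = click_history.get(query)
        if last_click in results:
            results.remove(last_click)
            results.insert(0, last_click)  # re-found result goes first
        return results

    history = {"salt shakers": "etsy.com/vintage-shakers"}
    print(rerank(history, "salt shakers",
                 ["wikipedia.org/wiki/Salt_shaker",
                  "etsy.com/vintage-shakers",
                  "amazon.com/salt-shakers"]))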

However, the ability to identify and support unusual patterns requires access to a significant amount of data. For a search engine to provide personalized support for navigational queries, you need to have issued that query before. And for Netflix to recommend a movie to you, it must have data from other people with your particular idiosyncratic tastes who have already rated many of the same movies that you have, as well as some that you have not seen. If Netflix wanted to recommend a home movie for you to watch from the collection of videos your father took years ago, it would find the task impossible because nobody has actually watched those movies since they were taken.

Fortunately, even though computers can't figure out what other people like, people are pretty good at it. Your sister, for example, could probably pick out a pretty reasonable subset of your dad's home movies for you to watch if she wanted to. For this reason, we often turn to our online social networks to ask for personal recommendations.

The people we turn to do not necessarily need to know us to be able to figure out what we like. If we are willing to pay for people's insights, crowdsourcing can provide an effective on-demand tool for personalization. We have explored two approaches to building personalized human computation systems.

Taste-Matching: In an approach similar to collaborative filtering, taste-matching identifies workers with similar taste to the requester, and then uses their taste to infer the requester’s taste.

Taste-Grokking: In an approach similar to what we currently do with friends, taste-grokking asks workers to explicitly predict what the requester will like after seeing examples of other things that the requester likes.


To receive a personalized rating for the kissing grandparents salt shaker shown above, the requester first needs to provide some ratings for other salt shakers. Using taste-matching, the system then collects ratings for the same salt shakers from crowd workers and matches the requester’s ratings with Worker I to predict the requester is very likely to enjoy the unrated salt shaker. Using taste-grokking, Worker III is shown the requester’s earlier ratings and offers an educated guess that the requester will like the last salt shaker.
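As a rough illustration of the taste-matching step (the ratings and item names below are invented for this sketch, not the paper's data), a system could pick the worker whose ratings best agree with the requester's and borrow that worker's rating for the unrated item:

    # Hedged sketch of taste-matching with made-up ratings: find the worker
    # closest to the requester, then use that worker's rating to predict the
    # requester's rating for the new item.
    requester = {"classic": 1, "windmill": 2, "figurine": 5}
    workers = {
        "Worker I":   {"classic": 1, "windmill": 1, "figurine": 5, "kissing grandparents": 5},
        "Worker II":  {"classic": 5, "windmill": 4, "figurine": 2, "kissing grandparents": 1},
        "Worker III": {"classic": 3, "windmill": 3, "figurine": 3, "kissing grandparents": 3},
    }

    def distance(a, b):
        shared = a.keys() & b.keys()  # items both people rated
        return sum(abs(a[i] - b[i]) for i in shared) / len(shared)

    best = min(workers, key=lambda w: distance(requester, workers[w]))
    print(best, "predicts a rating of", workers[best]["kissing grandparents"])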

Both taste-matching and taste-grokking can be used to personalize tasks that we don't yet know how to personalize algorithmically. The best approach to use depends on the particular task, as each approach has different benefits and drawbacks. Some differences include:
  • The number of workers required. To find a good match using taste-matching, opinions must first be solicited from a number of workers. In contrast, taste-grokking can be done by just one or two crowd workers.
  • The ability to reuse data. While taste-matching requires more workers, the data collected using that approach can also be re-used to make future recommendations to other people. Workers in a taste-grokking system provide recommendations specific to individual requesters, while taste-matching workers do not. Taste-matching is a good way to bootstrap systems that are likely to eventually have enough data to support automated approaches.
  • The quality of the data collected. Taste-grokking has a ground truth: successful taste-grokkers will be able to correctly guess what the requester likes. This makes it easy to identify incompetent workers (see the sketch after this list). On the other hand, taste-matching asks for the worker's personal opinions. Because there are no "right" answers when it comes to opinions, workers may be tempted to provide quick, unthoughtful responses.
  • The scrutability of the task. In the salt shaker example above, it is easy to use the data to separate people who prefer classic shakers from those who prefer kissing figurine shakers. But people's preferences aren't always so obvious. Taste-grokking salt shaker preferences is much harder when the requester instead says they really liked the first two salt shakers. We have observed that when there are a number of latent, unseen factors that are hard to capture in just a few examples, taste-matching outperforms taste-grokking.
  • Worker enjoyment. Workers report having more fun performing taste-grokking tasks. It can feel like a fun game to guess what other people are thinking, particularly when you get to learn whether you were right or wrong.
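Because taste-grokking has a ground truth, screening workers can be as simple as comparing their guesses against ratings the requester has already provided. Here is a hedged sketch of such a check (the tolerance of one rating point and the data are assumptions for illustration):

    # Rough sketch of screening taste-grokkers against ground truth.
    requester_ratings = {"classic": 1, "windmill": 2, "figurine": 5}
    worker_guesses = {
        "careful worker": {"classic": 2, "windmill": 2, "figurine": 4},
        "random clicker": {"classic": 5, "windmill": 5, "figurine": 1},
    }

    def mean_error(guesses, truth):
        items = guesses.keys() & truth.keys()
        return sum(abs(guesses[i] - truth[i]) for i in items) / len(items)

    for worker, guesses in worker_guesses.items():
        error = mean_error(guesses, requester_ratings)
        print(worker, "keep" if error <= 1.0 else "flag", round(error, 2))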
Using taste-matching and taste-grokking, we have successfully personalized a wide variety of tasks, ranging from image recommendation to text summarization to handwriting duplication. The crowd is great for tasks that people can do well but that we haven't fully figured out how to automate. Crowd-based personalization seems like an obvious win until our robot overlords finally figure out how to do it better on their own.


See also: Hal Hodson, The Crowd Can Guess What You Want to Watch or Buy. The New Scientist, 2989, 2 October 2014.

Monday, November 10, 2014

A Formula for Academic Papers: Related Work


The Related Work section of an academic paper is often the section that graduate students like writing the least. But it is also one of the most important sections to nail as the paper heads out for review. The Related Work section serves many purposes, several of which relate directly to reviewing:
  • The person handling the submission will use the referenced papers to identify good reviewers,
  • Reviewers will look at the references to confirm that the submission cites the appropriate work,
  • Everyone will use the section to understand the paper's contributions given the state of existing research, and
  • Future researchers will look to the Related Work section to identify other papers they should read.

Wednesday, October 22, 2014

Data Banks


Each of us individually creates a huge amount of data online. Some of this data we create explicitly, such as when we make webpages or public-facing profiles, write emails, or author documents. But we also create a lot of data implicitly as a byproduct of our interactions with digital information. This implicit data includes the search queries we issue, the webpages we visit, and our online social networks.

The data we create is valuable. We can use it to understand more about ourselves, and services can use it to personalize our experiences and understand people’s information behavior in general. But despite the fact that we are the ones who create the data, much of it is not actually in our possession. Instead, it resides with companies that provide us with online services in exchange for it. A handful of powerful companies have a monopoly on our data.
Definition of monopoly: the exclusive possession or control of the supply or trade in a commodity or service
Definition of data monopoly: the exclusive possession or control of the supply or trade in an individual’s personal data

Wednesday, October 8, 2014

Help! I'm Sexist!


The research studies I posted last Friday about the role gender plays in the STEM workplace paint a consistent picture: women face significant discrimination. Women are paid (and hired, and tenured) less than men with the same qualifications, and these gender differences are particularly large for parents. While women are often encouraged to address the existing disparities by advocating for themselves (e.g., by being assertive, negotiating, or encouraging diversity), research shows this type of behavior typically incurs a further penalty.

Instead, gender disparities in the STEM workplace are a problem that the entire community must address. Hiring managers need to hire more women. Managers need to promote more women. And peers need to accept diverse communication styles without the lens of gender.

Importantly, however, this does not just mean that MEN need to hire (and promote, and accept) more. The other consistent picture that arose from the studies I posted on Friday is that both men AND WOMEN discriminate against women. We all have deep-seated biases that contribute to the problem.

Friday, October 3, 2014

Research about Gender in the STEM Workplace


Science Faculty’s Subtle Gender Biases Favor Male Students by Corinne A. Moss-Racusin et al.
In a study with 127 science faculty at research-intensive universities, candidates with identical resumes were more likely to be offered a job and paid more if their name was "John" instead of "Jennifer." The gender of the faculty participating did not impact the outcome.

How Stereotypes Impair Women’s Careers in Science by Ernesto Reuben et al.
Men are much more likely than women to be hired for a math task, even when equally qualified. This happens regardless of the gender of the hiring manager.

Measuring the Glass Ceiling Effect: An Assessment of Discrimination in Academia by Katherine Weisshaar
In computer science, men are significantly more likely to earn tenure than women with the same research productivity. [From a summary]

Wednesday, August 13, 2014

Evidence from Behavior

 


Doug Oard at the Information School at the University of Maryland is teaching an open online course on information retrieval this fall (INST 734). Above is the brief cameo lecture I recorded using Office Mix for the segment on Evidence from Behavior.

Tuesday, July 29, 2014

The #GreatWalk Recap


Cale and I completed our 100 mile #GreatWalk from Bellevue, WA to Great Wolf Lodge. We live-blogged on Twitter as we walked, and I have recorded our tweets on this blog in chronological order to make them easy to read. Thanks for sharing our journey with us!
  • Day 1: We depart!
  • Day 2: A long walk to the airport
  • Day 3: Getting tired and frustrated
  • Day 4: A candy discovery
  • Day 5: Skirting the military base
  • Day 6: A wet and rainy day
  • Day 7: Into the wilderness
  • Day 8: We arrive at Great Wolf!
  • Day 9: A day of rest
  • Day 10: The trip home
Some interesting external links about the adventure:

#GreatWalk: Day 10

[This post includes my tweets (@jteevan) from the tenth day (July 27, 2014) of Cale and my 100 mile walk to Great Wolf!]

#GreatWalk: Day 9

[This post includes my tweets (@jteevan) from the ninth day (July 26, 2014) of Cale and my 100 mile walk to Great Wolf!]

#GreatWalk: Day 8

[This post includes my tweets (@jteevan) from the eighth day (July 25, 2014) of Cale and my 100 mile walk to Great Wolf!]