What Is the Faciloscope?

The Faciloscope is a tool designed to analyze online conversations like the ones we see on social media - in a Facebook post or Twitter thread - or in the comment section of a blog or news story.

The Faciloscope is part of a larger WIDE Research Center project supported by the Institute of Museum and Library Services. You can read more about the Facilitation Project and see some of the original research that the Faciloscope builds upon.

The Faciloscope is designed to be especially useful for moderators or facilitators of online conversations, helping them get a global sense of how the conversation is going and what, if anything, might be done to move it along in a productive way. It might also help a moderator know when a conversation has fizzled out or is, perhaps, not worth continuing.

Minh-Tam Nguyen & Ian Clark, Faciloscope research assistants!

How Does the Faciloscope Work?

The Faciloscope breaks a conversation down into three basic functional “moves” that participants make to move a conversation along. It does this by reading every sentence-sized chunk of the conversation and classifying each chunk with a machine-learning algorithm trained to recognize the three moves.

Like other kinds of rhetorical analysis, the analysis the Faciloscope performs is more concerned with the functional aspects of discourse than with its *content*. That is, the Faciloscope is not reading and evaluating the truthfulness or accuracy of the statements people make. Rather, it is watching for the kinds of moves people make that affect the overall dynamic of a conversation: moves that can keep it going or shut it down.
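
For readers who want a concrete picture of that chunk-and-classify step, here is a minimal sketch in Python. It is not the Faciloscope's actual code: the pre-trained classifier (move_classifier) is a hypothetical stand-in for the tool's real algorithm, and the sentence splitting uses NLTK, one of the libraries listed at the end of this document.

```python
# A minimal sketch of the chunk-and-classify step, not the Faciloscope's
# actual implementation. "move_classifier" is a hypothetical pre-trained
# model that maps a chunk of text to one of the three move labels.
import nltk  # may require a one-time nltk.download("punkt") for sentence splitting

MOVES = ("Staging", "Inviting", "Evoking")

def classify_conversation(raw_text, move_classifier):
    """Split a conversation into sentence-sized chunks and label each one."""
    chunks = nltk.sent_tokenize(raw_text)        # sentence-sized chunks
    labels = move_classifier.predict(chunks)     # one move label per chunk
    return list(zip(chunks, labels))
```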

What Are The Three Moves the Faciloscope Looks For?

Staging - a staging move is an attempt by a participant to establish the conversation’s ground rules or to help others understand the social circumstances that give rise to the exchange.
“Thank you all for joining our live blogging of the debate!”
“This forum is a great place to ask questions about how to get the best results from your new quantified self device, no matter what make or model it is.”
Inviting - an inviting move is a direct request to an individual or group to participate in the conversation in some way.
“Can the OP offer some additional context with regard to her experience with filing taxes for same-sex couples? It sounds like she has done this before in another state?”
“Feel free to join in! We welcome your comments and questions about today’s launch.”
Evoking - an evoking move points out a relationship between one participant and another, often by referring to something said previously. These moves can highlight agreement or disagreement, and they can be positive or negative in their overall tone or sentiment.
“As @sciencechick noted earlier, helping young women find a mentor they can trust in the early career phase is especially important in the technology sector.”
“All the fanbois are piling on here, of course. Forget it if your handle sounds like you are a woman and you want to try and offer constructive critique.”
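
To make the three labels concrete in the form a program would see them, here is a small hand-labeled set of (sentence, move) pairs built from the example quotes above. It mirrors the output shape of the classify_conversation sketch earlier in this document; the pairings are illustrations, not output from the Faciloscope itself.

```python
# Hand-labeled examples of the three moves, using quotes from the section
# above, in (chunk, label) form. These are illustrations, not tool output.
EXAMPLE_MOVES = [
    ("Thank you all for joining our live blogging of the debate!",
     "Staging"),
    ("Feel free to join in! We welcome your comments and questions about "
     "today's launch.",
     "Inviting"),
    ("As @sciencechick noted earlier, helping young women find a mentor "
     "they can trust in the early career phase is especially important "
     "in the technology sector.",
     "Evoking"),
]
```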

Trying Out the Faciloscope

To start using the Faciloscope, simply paste *plain text* into the text box on the intro screen and hit the “ANALYZE” button.

You’ll get the best results if what you paste in includes the comments made by participants but leaves out information that is sometimes added automatically, such as timestamps or labels. The reason is that the Faciloscope is sensitive to repetition: any repeated elements will be factored into the analysis, and repeated information that a human reader would otherwise ignore may throw off the results.

Remove auto-generated and repetitive text for better results

But...don’t throw out the punctuation, emoticons, or other elements that are contained in a message. The messages our machine-learning algorithm was trained on had these kinds of common elements in them. They are part of the way people communicate on the internet, and so they are valid input for the Faciloscope.
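
If you are preparing a transcript by hand, a small script can help with that kind of pre-screening. The sketch below is only an illustration, and the timestamp and interface-label patterns in it are hypothetical; adjust them to whatever your platform inserts automatically. Note that it only drops whole auto-generated lines and leaves punctuation and emoticons untouched.

```python
# A rough pre-screening sketch. The patterns below are hypothetical examples
# of auto-generated text; edit them to match your own platform.
import re

AUTO_GENERATED = [
    r"^\s*Posted on \d{1,2}/\d{1,2}/\d{4}.*$",   # e.g. "Posted on 6/14/2015 ..."
    r"^\s*(Reply|Share|Like)\s*$",               # e.g. interface button labels
]

def strip_auto_text(transcript):
    """Drop auto-generated lines; keep punctuation and emoticons intact."""
    kept = []
    for line in transcript.splitlines():
        if any(re.match(pattern, line) for pattern in AUTO_GENERATED):
            continue
        kept.append(line)
    return "\n".join(kept)
```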

Making Sense of the Results

The Faciloscope offers analytic and descriptive results that require interpretation by human users. Think of these results as a global view of the whole conversation.

Faciloscope results with donut chart & bands visualization highlighted
  1. The donut chart near the top of the results view shows by percentage and by color how the conversation chunks were classified. Staging is typically the most common.
  2. Below the donut chart, we include a “bands” visualization that shows where the moves happen, over time, as the conversation progresses. Here, the colors once again denote the three types of moves. This display allows you to see how well the three moves are distributed over the full length of the interaction. You can use a pinch gesture (open and close) to “zoom” the bands visualization. Zooming also allows you to see the plain text that corresponds to the moves, which appears just below the bands.

Below the bands visualization is a transcript of the full session as it was parsed (broken into chunks) and classified by the Faciloscope. This display allows you to see where you may or may not agree with the category assigned by the algorithm, and to look at particularly interesting moves that could be worth further analysis.
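
The donut-chart percentages are, in essence, a tally of the labels assigned to the chunks. A sketch of that calculation, building on the classify_conversation sketch earlier in this document, might look like this:

```python
# A sketch of deriving donut-chart-style percentages from labeled chunks,
# as returned by the classify_conversation() sketch above.
from collections import Counter

def move_percentages(labeled_chunks):
    """Return each move's share of all chunks, as a percentage."""
    counts = Counter(label for _, label in labeled_chunks)
    total = sum(counts.values()) or 1          # avoid division by zero
    return {move: 100.0 * counts.get(move, 0) / total
            for move in ("Staging", "Inviting", "Evoking")}
```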

More About the Faciloscope

The purpose of this project, from the beginning, has been to create an interactive tool based on an online discussion facilitation worksheet developed by another WIDE research team. You can read more about their work and see their Facilitation Toolkit here: http://facilitation.matrix.msu.edu/index.php/download_file/view/43/124/

Our challenge was to apply machine-learning techniques to produce results similar to what the Facilitation team had produced with a team of trained human raters: color-coded transcripts of online discussions that allowed practitioners to compare the dynamics of a discussion with the recommended set of facilitation moves in the Facilitation Toolkit. It was our goal that the tool be useful in training and debriefing sessions with facilitators. We also hoped it might be used as formative feedback during longer, ongoing facilitation sessions to assist moderators in guiding online discussions.

How Did We Train the Faciloscope’s Learning Algorithm?

During the Summer of 2014, our research team gathered tens of thousands of participant contributions to online conversations and hand-coded these. Each coding unit was about the size of a sentence and was read and assigned a category by at least two members of the team. This process took about four months.

We calibrated our rating process carefully and achieved a high rate of agreement among our research team. We worked to achieve 95% agreement as measured by Cohen’s Kappa, a statistic that measures agreement between two raters after correcting for chance.

The result of this round of hand-coding was a large corpus of categorized bits of natural language. About 16,000 sentences in all! That corpus is used by the learning algorithm to extract features characteristic of each of our three categories. Exactly which features the algorithm most strongly associates with each category is not precisely clear, but neither are the features that *human* raters use to place statements into each category!

What we do know is that the algorithm performs almost as well as a human rater. When tested against 20% of the human-coded corpus, the Faciloscope achieves a 76% Kappa agreement with human raters.
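
For readers curious about what that evaluation looks like in practice, here is a minimal training-and-evaluation sketch using scikit-learn, one of the libraries listed at the end of this document. The feature extraction and classifier shown (TF-IDF plus logistic regression) are assumptions chosen for illustration, not the Faciloscope's actual model; only the 20% hold-out and the Cohen's Kappa scoring mirror the process described above.

```python
# A minimal training-and-evaluation sketch. The TF-IDF features and
# logistic-regression classifier are illustrative assumptions; only the
# 20% hold-out and Cohen's kappa scoring mirror the process described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def train_and_evaluate(sentences, labels):
    """Hold out 20% of the hand-coded corpus and report Cohen's kappa."""
    X_train, X_test, y_train, y_test = train_test_split(
        sentences, labels, test_size=0.2, random_state=0)
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    kappa = cohen_kappa_score(y_test, model.predict(X_test))
    return model, kappa
```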

Using the Faciloscope

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. We do not store user-submitted data, but it is the users' responsibility to safeguard or pre-screen their data to avoid potential security threats.

If you use the results of the Faciloscope in a publication, please cite this work:

Omizo, Ryan, Ian Clark, Minh-Tam Nguyen, & William Hart-Davidson. Faciloscope. East Lansing, MI: N.p., 2016. Web.

Research Team

Ryan Omizo, University of Rhode Island, Project Lead, WIDE Researcher
Bill Hart-Davidson, Michigan State University, Senior Researcher, WIDE Research Center
Ian Clark, Research Assistant, Undergraduate Student in Experience Architecture, MSU
Minh-Tam Nguyen, Ph.D. Student, WIDE Research Assistant & Project Liaison with the Facilitation Project

Read More about the Faciloscope Project

"Can an Algorithm Solve Comment Trolling?"
http://www.cjr.org/behind_the_news/comment_moderation_algorithm.php
"The Faciloscope's Goal: Everything in Moderation"
http://www.cal.msu.edu/faciloscope

Read More from the Computational Rhetoric Group (CRG)

"Finding Genre Signals in Academic Writing"
Ryan Omizo & William Hart-Davidson
Journal of Writing Research, 7(3), 485-509
http://dx.doi.org/10.17239/jowr-2016.07.03.08

Acknowledgements

Thanks to Beck Tench for naming the Faciloscope!

Thanks to the WIDE Research Center and to Matrix for their support of the project.

Thanks to the IMLS for their support of the Facilitation Project.

Software Packages Used

The Faciloscope is a Django app and relies on the following software libraries for its text processing and machine learning algorithms:

Bird, S. (2006, July). NLTK: the natural language toolkit. In Proceedings of the COLING/ACL on Interactive presentation sessions (pp. 69-72). Association for Computational Linguistics.

Buitinck, L., Louppe, G., Blondel, M., Pedregosa, F., Mueller, A., Grisel, O., ... & Layton, R. (2013). API design for machine learning software: experiences from the scikit-learn project. arXiv preprint arXiv:1309.0238.

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., ... & Vanderplas, J. (2011). Scikit-learn: Machine learning in Python. The Journal of Machine Learning Research, 12, 2825-2830.

Perkins, J. (2010). Python text processing with NLTK 2.0 cookbook. Packt Publishing Ltd.

This project was made possible in part by the Institute of Museum and Library Services grant LG-25-10-0034-10.

© 2014 Facilitation Toolbox. All rights reserved.
