Charlyne Topic Modeling Test


 

For this practicum, I tried the lesson plan on topic modeling by hand by Shawn Graham, Ian Milligan, and Scott Weingart. I first highlighted words in the sample text, the Gettysburg Address, that I saw as related to the word "war."  With a different color, I highlighted words that I saw as related to "governance."  The next step was to have someone else do the same thing. After having a friend highlight words that she thought were related to "war" and "governance," I compared how we marked up the texts, which is illustrated below.  Words in boldface reflect the differences in where we marked our texts. As you can see, there are no considerable differences. Graham, Milligan, and Weingart point out that this process works to train a computer to read words' semantic meaning in the way we want it to. Otherwise, a computer will rely only on decontextualized words, paying attention only to the alphanumeric text.  It is our job to encode the alphanumeric text so that the computer can understand the semantic meaning of words.
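
One way to make that hand exercise concrete is to compare the two markups computationally. The following is a minimal sketch in Python; the word sets are invented stand-ins for our highlights, not the actual markups from the exercise.

# Hypothetical word lists standing in for two readers' highlights
# of the Gettysburg Address (not the actual markups described above).
reader_a = {
    "war": {"war", "battlefield", "died", "struggled", "brave"},
    "governance": {"nation", "government", "people", "liberty"},
}
reader_b = {
    "war": {"war", "battlefield", "died", "dead", "fought"},
    "governance": {"nation", "government", "people", "freedom"},
}

# Words both readers assigned to a topic, and words only one of us marked
# (the "boldfaced" differences in the comparison).
for topic in ("war", "governance"):
    shared = reader_a[topic] & reader_b[topic]
    differs = reader_a[topic] ^ reader_b[topic]
    print(topic, "shared:", sorted(shared))
    print(topic, "differs:", sorted(differs))

Counting the shared and differing words this way is essentially what the comparison step of the lesson asks us to do by eye.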

 

Topic modeling seems like a relevant method for work in Writing Studies. There has been a recent debate over whether student writing should be scored by computers, specifically in large-scale assessment for college writing placement exams.  Some universities have even considered using machine scoring in high-enrollment classrooms.  Proponents of this technology argue that machine scoring can give students instant feedback so they can more easily improve their writing.  The belief is that assigning more writing will give students more opportunities for learning, but this is difficult in classrooms where the professor-to-student ratio makes it impossible to give each student substantial feedback. With increasing student enrollment, machine scoring of writing is then leveraged as a solution.

However, even though an algorithm is programmed into these machines, it is still questionable whether a machine could assess the varied meanings of each student text.  That is, any semantic meaning would be limited to the perspective of the programmer of the scoring machine's algorithm. Some type of topic modeling would then need to be used by experts in the field as well as by the institutions it would serve.  The university's learning goals and curriculum would then have to be aligned with how the algorithm evaluates the writing. Studies by Les Perelman have found that existing machine-scoring technologies are only capable of assessing writing based on sentence length, syntax, and vocabulary. There is little evidence of how these machines assess content or the rhetorical impact of student writing.  It would be interesting to see how topic modeling could be advanced by those in our field to train the technology to reflect the values of writing in our discipline.
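
As a purely hypothetical illustration of what an automated topic model can and cannot see, the sketch below fits a small two-topic model over invented stand-in sentences using scikit-learn's LatentDirichletAllocation. The texts, parameters, and topic count are assumptions for demonstration only, not drawn from any actual placement exam or scoring system.

# Minimal topic-modeling sketch with scikit-learn (assumed installed).
# The "essays" below are invented stand-ins, not real student writing.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

essays = [
    "The war divided the nation and tested its government.",
    "Citizens expect their government to protect liberty and equality.",
    "Soldiers who died in battle are honored by the living.",
]

# Bag-of-words counts, then a two-topic model fit over them.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(essays)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Show the top words the model associates with each topic.
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-4:][::-1]]
    print(f"topic {i}:", top)

Note that a model like this only groups co-occurring vocabulary; it says nothing about argument, arrangement, or rhetorical effect, which is exactly the gap Perelman's studies point to.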

 

 

 

 
